Changing capital aggregation

The Rad Geek People’s Daily has an interesting comment on my post Capitalists vs. Entrepreneurs:
The only thing that I would want to add here is that it’s not just a matter of projects being able to expand or sustain themselves with little capital (although that is a factor). It’s also a matter of the way in which both emerging distributed technologies in general, and peer production projects in particular, facilitate the aggregation of dispersed capital — without it having to pass through a single capitalist chokepoint, like a commercial bank or a venture capital fund. Because of the way that peer production projects distribute and amortize their costs of operation, entrepreneurs can afford to bypass existing financial operators and go directly to people with $20 or $50 to give away and take the money in in small donations, because they no longer need to get multimillion dollar cash infusions all at once just to keep themselves running: the peer production model allows greater flexibility by dispersing fixed costs among many peers (and allowing new entrepreneurs to easily step in and take over the project, if one has to bow out due to the pressures imposed by fixed costs), rather than by concentrating them into the bottom line of a single, precarious legal entity. Meanwhile, because of the way that peer production projects distribute their labor, peer-production entrepreneurs can also take advantage of “spare cycles” on existing, widely-distributed capital goods — tools like computers, facilities like offices and houses, software, etc. which contributors own, which they still would have owned personally or professionally whether or not they were contributing to the peer production project, and which can be put to use as a direct contribution of a small amount of fractional shares of capital goods directly to the peer production project. 
So it’s not just a matter of cutting total aggregate costs for capital goods (although that’s an important element); it’s also, importantly, a matter of new models of aggregating the capital goods to meet whatever costs you may have, so that small bits of available capital can be rounded up without the intervention of money-men and other intermediaries.
I like the point about the improved aggregation of capital; it is quite possibly as important as the reduction in capital requirements.

This gets me thinking more about the importance of reduced coordination costs (aggregation of capital being a special case). Clearly computers and networks contribute enormously to improving the productivity of coordination. However I don’t think we have good models for the costs of coordination, or the effects of improved coordination, so its importance tends to be underestimated.

I guess cheaper / easier / faster coordination is a special case of (what economists call) “technology” — changes with big economic impact that are outside the economic model. From another point of view, coordination costs and delays are a special case of (what economists call) “frictions”, which are also hard for them to model well. So trends in coordination may well affect the economy in major ways, but are nonetheless mostly invisible in economic models.

And in fact we’d expect coordination costs to fall and speed and capacity to rise on an exponential trend, riding Moore’s law. Given that changes in coordination have significant economic impact (how could they not?) there’s a huge long term economic trend that’s formally invisible to economists.

One aspect of this that’s perhaps subtle, but often important, and that shows up strongly in capital aggregation, is how “technology” changes the risk of cooperation. One of the big sources of risk is that someone you’re cooperating with will “defect” (in the prisoner’s dilemma sense) — that the cooperative situation will give them ways to benefit by hurting you. In fund raising this risk is obvious, but working with people, letting others use your “spare cycles”, etc. have risks too. Ways of coordinating have been evolving to counteract or at least mitigate these risks — clearer norms for response to bad actors, online reputations, large scale community reactions to serious bad behavior, and so forth. Wikipedia’s mechanisms for quick reversion of vandalism are an example. Even spam filtering is a case in point, reducing our costs and the benefits to the bad actors. It is too early to know for sure, but right now there are enough success stories that I’d guess that the space of defensible and sustainable cooperation is pretty big — maybe even big enough to “embrace and extend” important parts of the current economy.

Turking! The idea and some implications

I recently read an edited collection of five stories, Metatropolis; the stories are set in a common world the authors developed together. This is a near future in which nation state authority has eroded and in which new social processes have grown up and have a big role in making things work. In some sense the theme of the book was naming and exploring those new processes.

One of those processes was turking. The term is based on Amazon’s Mechanical Turk. Google shows the term in use as far back as 2005 but I hadn’t really wrapped my mind around the implications; Metatropolis broadens the idea way beyond Amazon’s implementation or any of the other discussions I’ve read.

Turking: Getting bigger jobs done by semi-automatically splitting them up into large numbers of micro-jobs (often five minutes long or less), and then automatically aggregating and cross-checking the results. The turkers (people doing the micro-jobs) typically don’t have or need any long term or contractual relationship with the turking organizers. In many cases, possibly a majority, the turkers aren’t paid in cash, and often they aren’t paid at all, but do the tasks as volunteers or because they are intrinsically rewarding (as in games).

One key element that is distinctive to turking is some sort of entirely or largely automated process for checking the results — usually by giving multiple turkers the same task and comparing their results. Turkers who screw up too many tasks aren’t given more of those tasks. Contrast this with industrial employment where the “employer” filters job candidates, contracts with some to become “employees”, and then enforces their contracts. The relationship in turking is very different: the “employer” lets anybody become an “employee” and do some tasks, doesn’t (and can’t) control whether or how the “employee” does the work, but measures each “employee’s” results and decides whether and how to continue with the relationship.
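The measure-and-filter loop described above can be sketched in a few lines of code. This is a toy illustration, not any real platform’s API: the worker objects, the `do` method, the redundancy of three, and the 50% agreement threshold are all assumptions made up for the example.

```python
import random
from collections import Counter, defaultdict

def turk(tasks, workers, redundancy=3):
    """Give each micro-task to several workers, keep the majority answer,
    and track how often each worker agrees with the majority."""
    agreement = defaultdict(lambda: [0, 0])  # worker -> [agreed, answered]
    results = {}
    for task in tasks:
        # Redundant assignment: the same task goes to several workers.
        answers = {w: w.do(task) for w in random.sample(workers, redundancy)}
        majority = Counter(answers.values()).most_common(1)[0][0]
        results[task] = majority
        for w, a in answers.items():
            agreement[w][1] += 1
            agreement[w][0] += a == majority
    # Post hoc filtering: workers who disagree too often get no more tasks.
    trusted = [w for w in workers
               if agreement[w][1] == 0
               or agreement[w][0] / agreement[w][1] >= 0.5]
    return results, trusted

# Toy demo: two careful workers and one that always answers "???".
class Careful:
    def do(self, task): return task.upper()

class Sloppy:
    def do(self, task): return "???"

crowd = [Careful(), Careful(), Sloppy()]
answers, trusted = turk(["ok", "go"], crowd, redundancy=3)
```

Note how the contrast with industrial employment shows up in the code: nobody is screened before doing tasks, and the only control is the comparison of measured results after the fact.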

This is an example of a very consistent pattern in the transition from industrial to networked relationships: a movement from gatekeeping and control to post hoc filtering. Another example is academic publishing. The (still dominant) industrial model of publishing works through gatekeeping — articles and books don’t get published until they are approved through peer review. The networked model works through post hoc processes: papers go up on web sites, get read, commented on, and reviewed, often are revised, and over time get positioned on a spectrum from valid/valuable to invalid/worthless. The networked model is inexorably taking over, because it is immensely faster, often fairer (getting a few bad anonymous reviews can’t kill a good paper), results in a wider range of better feedback to authors, etc.

It seems quite possible — even likely — that post hoc filtering for work will produce substantially better results than industrial style gatekeeping and control in most cases. In addition to having lower transaction costs, it could produce better quality, a better fit between worker and task, and less wasted effort. It also, of course, will change how much the results cost and how much people get paid — more on that below.

Amazon’s Mechanical Turk just involves information processing — web input, web output — and this is typical of most turking today. However there are examples which involve real world activities. In an extreme case turking could be used to carry out terrorist acts, maybe without even doing anything criminal — Bruce Sterling has some stories that explore this possibility. But there are lots of ordinary examples, like counting the empty parking spaces on a given block, or taking a package and shipping it.

Examples

  • Refugees in camps are turking for money. The tasks are typical turking tasks, but the structure seems to be a more standard employment relationship. If there were enough computers, I bet a high percentage of the camp residents would participate, after some short period in which everyone learned from each other how to do the work. Then the organizers would have to shift to turking methods because the overhead of managing hundreds of thousands of participants using contracting and control would be prohibitive.
  • A game called FoldIt is using turking to improve machine solutions to protein folding. Turns out humans greatly improve on the automatic results but need the machine to do the more routine work. The turkers have a wide range of skill and a variety of complementary strategies, so the project benefits from letting many people try and then keeping the ones who succeed. (This is an example where the quality is probably higher than an industrial style academic model could generate.) The rewards are the intrinsic pleasure of playing the game, and also maybe higher game rankings.
  • There’s a startup named CrowdFlower that aims to make a business out of turking. CrowdFlower has relationships with online games that include the turking in their game play. So the gamers get virtual rewards (status, loot). I can easily imagine that the right turking tasks would actually enhance game play. CrowdFlower are also doing more or less traditional social science studies of turking motivations etc. Of course the surveys that generate data for the research are also a form of turking.
  • Distributed proofreading. OCR’d texts are distributed to volunteers and the volunteers check and correct the OCR. (They get both the image and the text.) The front page goes out of its way to note that “there is no commitment expected on this site beyond the understanding that you do your best.” This is an early turking technology, and works in fairly large chunks, a page at a time. It may be replaced by a much finer grained technology that works a word at a time — see below.
  • Peer production (open source and open content). An important component of peer production is small increments of bug reporting, testing, code review, documentation editing, etc. Wikipedia also depends on a lot of small content updates, edits, typo fixes, etc. These processes have the same type of structure as turking, although they typically haven’t been called turking. The main difference from the other examples is that there’s no clear-cut infrastructure for checking the validity of changes. This is at least partly historical: these processes arose before the current form of turking was worked out. The incentive — beyond altruism and the itch to correct errors — is that one can get credit in the community and maybe even in the product.
  • I just recently came across another good example that deserves a longer explanation: ReCaptcha. It is cool because it takes two bad things, and converts them into two good things, using work people were doing anyway.

    The first bad thing is that OCR generates lots of errors, especially on poorly printed or scanned material — which is why the distributed proofreading process above is required. These can often be identified because the results are misspelled and/or the OCR algorithm reports low confidence. From the OCR failures, you can generate little images that OCR has trouble recognizing correctly.

    The second bad thing is that online services are often exploited by bad actors who use robots to post spam, abusively download data, etc. Often this is prevented by captchas, images that humans can convert into text, but that are hard for machines to recognize. Since OCR failures are known to be hard for machines to recognize correctly, they make good captchas.

    Recaptcha turns the user effort applied to solving captchas, which would otherwise be wasted, into turking to complete the OCR — essentially very fine grained distributed proofreading. Recaptcha figures out who’s giving the correct answers by having each user recognize both a known word and an unknown word, in addition to comparing answers by different users. Users are rewarded by getting the access they wanted.

    Note that if spammers turk out captcha reading (which they are doing, but which increases their costs significantly) then they are indirectly paying for useful work as well. Potentially Recaptcha could be generalized to any kind of simple pattern recognition that’s relatively easy for humans and hard for machines, which could generate a lot of value from human cognitive capacities.
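The paired known/unknown word check is simple enough to sketch. This is a hedged toy version, not Recaptcha’s actual algorithm: the function names, the normalization, and the two-vote quorum are assumptions for illustration only.

```python
from collections import Counter

def passes_control(typed_control, control_word):
    """The user must correctly read the control word, whose text is known."""
    return typed_control.strip().lower() == control_word.lower()

def accept_reading(responses, control_word, quorum=2):
    """Pool readings of the unknown word from users who passed the control
    check; accept a transcription once `quorum` users agree on it."""
    votes = Counter(
        unknown.strip().lower()
        for typed_control, unknown in responses
        if passes_control(typed_control, control_word)
    )
    if votes and votes.most_common(1)[0][1] >= quorum:
        return votes.most_common(1)[0][0]
    return None  # not enough agreement yet; keep serving the image

# Each response is (reading of the known word, reading of the unknown word).
responses = [("Street", "upon"), ("street", "upon"), ("xyzzy", "uqon")]
```

With this data the third user fails the control word, so their reading of the unknown word is ignored; the other two agree, and “upon” is accepted as the transcription.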

Some implications

It seems that over time a huge variety and quantity of work could be turked. The turking model has the capacity to productively employ a lot of what Clay Shirky calls our “cognitive surplus”, and also whatever time surplus we have. Many unemployed people, refugee populations and I’m sure lots of other groups have a lot of surplus. As Shirky points out, even employed people have a discretionary surplus that they spend watching TV, reading magazines, playing computer games, etc. However right now there’s no way to bring this surplus to market.

Switching from industrial relationships (heavyweight, gatekeeping and control) to networked relationships (lightweight, post hoc filtering) reduces per task transaction costs to a tiny fraction of their current level, and makes it feasible to bring much of this surplus to market.

The flip side of that of course is that the more this surplus is available for production, the less anyone will get paid for the work it can do. Already in a lot of existing turking, the participants aren’t getting paid — and in many cases the organizers aren’t getting paid either. Also, more or less by definition, the surplus that would be applied to turking currently isn’t being used for any other paid activity, so potential workers aren’t giving up other pay to turk. Therefore, I expect the average payment for a turked task to approach zero, for both turkers and organizers. Usually there will still be rewards, but they will tend to be locally generated within the specific context of the tasks (online community, game, captcha, whatever). Often the entity that generates the rewards won’t get any specific benefit from the turking — for example, in the case of ReCaptcha, the sites that use it don’t particularly benefit from whatever proofreading gets done.

Mostly turking rewards won’t be measurable in classical monetary terms — in some cases rewards may involve “in game” currency but this doesn’t yet count in the larger economy. In classical monetary terms, the marginal cost of getting a job turked will probably approach the cost of building, maintaining and running the turking infrastructure — and that cost is exponentially declining and will continue to do so for decades.

This trend suggests that we need to find some metric complementary to money to aggregate preferences and allocate large scale social effort. But I’m not going to pursue that question further here.

Obviously it will be important to understand what types of work can be turked and what can’t. For example, could the construction of new houses be turked? That may seem like a stretch, but Habitat for Humanity and other volunteer groups do construct houses with a process very much like turking — and of course this has a long history in the US, with institutions like barn raising. Furthermore the use of day labor isn’t that different from turking. I’d guess that within ten years we’ll be turking much of the construction of quite complex buildings. It is interesting to try to imagine what this implies for construction employment.

Realistically, at this point we just don’t know the limits of turking. My guess is that the range of things that can be done via turking will turn out to be extremely broad, but that it will take a lot of specific innovations to grow into that range. Also of course there will be institutional resistance to turking many activities.

When a swarm of turkers washes over any given activity and devours most of it, there will typically be a bunch of nuggets left over that can’t be turked reliably. These will probably be things that require substantial specialized training and/or experience, relatively deep knowledge of the particular circumstances, and maybe certification and accountability. Right now those nuggets are embedded in turkable work and so it is hard or impossible to figure out their distribution, relative size, etc. For a while (maybe twenty years or so) we’ll keep being surprised — we’ll find some type of nuggets we think can’t be turked, and then someone will invent a way to make most of them turkable. Only if and when turking converges on a stable institution will we be able to state more analytically and confidently the characteristics that make a task un-turkable.

Another issue is security / confidentiality. Right now, corporations are willing to use turking for lots of tasks, but I bet they wouldn’t turk tasks involving key market data, strategic planning, or other sensitive material. On the other hand, peer production projects are willing to turk almost anything, because they don’t have concerns about maintaining a competitive advantage by keeping secrets. (They do of course have to keep some customer data private if they collect it at all, but usually they just avoid recording personal details.) I’d guess that over time this will give entities that keep fewer secrets a competitive advantage. I think this is already the case for a lot of related reasons, because broadly speaking trying to keep secrets imposes huge transaction costs. Eventually keeping big secrets may come to be seen as an absurdly expensive and dubious proposition and the ability to keep big pointless secrets will become an assertion of wealth and power. (Every entity will need to keep a few small secrets, such as the root password to their servers. But we know how to safely give everyone read only access to almost everything, and still limit changes to those who have those small secrets.)

There’s lots more to say, but that’s enough for now.

The two faces of Avatar

Avatar just keeps demanding a bit more analysis.

To recap, I agree the story is an embarrassingly naive retread of the “white man goes native and saves the natives” plus gooey nature worship. But…

I also believe the world Pandora, as shown to us in the movie, can’t be confined within that story. It keeps escaping and cutting across or contradicting the premises of the narrative, as discussed in many nerd posts including mine.

So Avatar has two very different faces, and different personalities to go with them. And I think this goes back to the basic character of the social processes used to create Avatar. No, seriously, stay with me for a minute and I’ll explain.

For our purposes we can say there are three modes of production in films and a lot of other activities: craft, industrial, and networked. Of course any real film is produced through a mix of these.

A film made by a small team working on their own (with or without a presiding genius) is an example of craft production, just like similar teams producing ceramic tiles or houses.

A film made in a “factory” environment along with many others (like The Wizard of Oz or Casablanca) is an example of industrial production.

And a film made by multiple loosely coordinated groups with different expertise is an example of network production.

Network production is now nearly universal in large films, but before Avatar I can’t think of any examples of network production driving the film content. Generally the network mostly fleshes out content dictated by a small team that is using craft production. (If you can think of good previous examples, please comment or email, I’d really like to know.)

In Avatar, Cameron wanted a lot of depth in his world, and had the money and skills to pull together a network to produce it. Pandora was created by a huge collaboration between ecologists, biologists, linguists, artists, rendering experts and so forth. The collaboration also necessarily included software and hardware experts who built the computer networks, and project managers who shaped the social network, and these people were no doubt also very engaged with the ideas about Pandora and contributed to its character in significant ways. Cameron was of course involved, but the depth and complexity of the world (and the network) meant that most of the decisions had to be internal to the network.

So Avatar inevitably has two faces. The plot arc, the characters and the dialog were crafted by Cameron. Much of the commercial success of the film no doubt is due to his judgements about what would work in that domain. But Pandora, and probably much of the human tech in the film, was created by a social network that was focused on scientific (as well as artistic) verisimilitude, conceptual integrity across a wide range of disciplines and scales, and our best current skills for designing and managing big networks of people and machines. And a significant amount of the success of the film is due to the richness and coherence of the vision generated by the network.

In some sense Cameron was responsible for both faces. In one case he was directly shaping the content. In the other, he was shaping and directing the social network that produced the content. But the two forms of production generate very different kinds of results, and those generate the divergent critical reactions that tend to focus either on the story or on the world.

This analysis brings into focus a question on which I have no information, but which I think is important to our deeper understanding of Avatar and our thinking about the successors it will inevitably inspire. Who defined the parts of the world that bridge between the network and the story? For example, in Pandora, animals, Na’vi and trees can couple their nervous systems to each other. This coupling plays a role in the story, but it could have been avoided in some cases, and made less explicit and more “magical” in others. On the other hand this coupling mechanism is constitutive of key parts of Pandora such as the “world brain”, and it drastically affects our understanding of the nature of Pandora and its possible history.

Did the network come up with this coupling as a way to make mind transfer — a part of the story that would otherwise have been magic — into science? Or was it somehow integral to Cameron’s vision of Pandora? Or — more likely — some combination of those.

Untrustworthy by design? Or incompetence? Just untrustworthy

James Kwak has an interesting post Design or Incompetence? in which he discusses the ways banks are delaying and obstructing customer efforts to get benefits the banks have offered. In addition the banks are misrepresenting their own actions and obligations. As the title suggests he wonders whether this behavior is due to design or incompetence, and ultimately concludes (after some discussion of the internal institutional issues) that it doesn’t matter because it is a systemic consequence of the banks’ incentives.

As in my previous post on “Lying or Stupid?”, I’d say this analysis is interesting and sometimes useful, but that we should start by saying the institutions involved are untrustworthy, which is true either way; often we don’t need to look into the finer distinctions. Debating the details usually just gives these bad actors ways to muddy the water.

More generally, we need to develop social sanctions — applied by governments, broad condemnation, boycott, and/or whatever else will work — to “adjust the incentives” of these bad actors so they either become trustworthy or are replaced by organizations that are trustworthy. These social sanctions worked with apartheid and to some extent with third world sweatshops; we can at least imagine them working with respect to untrustworthiness.

Right now unfortunately there are many who argue that not only should corporations ignore this sort of issue, but even further that it would be immoral for them to take such considerations into account. Furthermore the general perception is that we can’t expect corporations to care about morality, as Roger Lowenstein discusses. I’ve been chewing on this issue in the comments to a couple of interesting posts by Timothy Lee and plan to summarize my resulting thoughts here soon. The good news from that discussion is that even some committed free market folks such as Timothy agree that we need to have corporations put moral obligations such as trustworthiness above profits. Now we need rough consensus on that…

Interpreting Avatar

Bloggers who I greatly respect feel Avatar is just another Dances with Wolves — a way of putting a romantic gloss on native authenticity and then appropriating it by having a “white man” out-native the natives. So I want to think a bit about where I agree and disagree with this position.

We could adopt a bunch of interpretations of Avatar — or some combination of them:

1. Cameron wanted to make a big movie that would advance his career. He picked 3D CGI, and the rest was more or less inevitable as “engineering decisions” to optimize his objective function.
2. Cameron had some goals that included endorsing fairly naive political messages (respect for earth, etc.). He hired good people to invent a cool ecology without worrying about the backstory, and then just pasted his agenda on top of that.
3. Cameron had something like the posthuman interpretation in mind, but since he knows what sells, he drenched it in sugar syrup to make it palatable.
4. The internal logic of the story pulls it into a posthuman shape, and Cameron, however he started, saw he couldn’t fight that and so went with it.

But we don’t have to just guess about which of these is correct. After Titanic, Cameron wrote a “114 page scriptment… known at the time as Project 880” (apparently a “scriptment” is a preliminary version of a movie script, but in this case much more complete than the movie as shot). Based on an extended description, the scriptment was a much more detailed version of Avatar, with pretty much the same focus and a lot more explicit back story. Most of the changes from Project 880 to Avatar as shot are trimming and making the action more obvious.

And Project 880 supports the “naive messages” interpretation, but also is fairly consistent with the “internal logic” interpretation.

There are a few touches in Project 880 that show Cameron had a sense of the posthuman logic of the story. When the humans are being kicked out they are told that if they come back “Pandora will send them home with a horrible virus that will wipe out humanity” but apparently this is just a threat by the pro-Pandora humans. So Cameron knew this threat fit into the logic of his story but didn’t want to (or didn’t see how to) make it an integral part of the story.

Bottom line, the people who say Avatar is just Dances with Wolves with 3D CGI alien “natives” are right as far as they go. That was the movie Cameron planned to make. But I think we can make a legitimate case that the internal logic of Pandora, the Na’vi, etc. escapes from that formula and has its own very subversive implications. These implications subvert not only the characteristics of the Na’vi — they must be really high tech, only “at one with nature” because they designed it — but also our ideas of posthuman — it doesn’t need to involve metal tech and smart computers.

And regarding the origin of the quadrupedal Na’vi vs. the hexapodal animals (why not hexapeds or quadrupods?) I still like my extreme version. We know from the historical evidence that interpretation (3) — a story about a posthuman high-tech Na’vi + trees symbiosis — wasn’t Cameron’s intention. But we also know (3) is more consistent with what we see in the film than any other backstory. So why not go the whole way and make our backstory fully consistent? The fact that humans identify with and even fall in love with Na’vi is a big tactical advantage to the Pandoran system, so why not say Pandora arranged that? It doesn’t stretch credulity any more than humans being able to grow avatars in the first place — and in writing a back story, we could easily make the avatar tech a covert “gift” from Pandora as well, transferred by subverting early human scientists.

Let’s consider how that would play out in a “prequel”:

Humans first visit Pandora a few decades before Avatar. This is an exploration ship, staffed mainly by scientists, but with some military / naval types as well.

The scientists don’t encounter Na’vi, but they do study the hexapods and the trees, and they find the unobtanium. At some point a scientist dies on the planet and his / her mind is assimilated by the trees. Then the trees start to communicate covertly with some other scientists.

With the help of the trees, scientists figure out some of the biology of Pandora, and figure out how to grow avatars, but initially not human-like ones. Pandora in turn figures out how to grow human-like Na’vi — maybe it even transfers the mind of the initial scientist who died into one of the first Na’vi. (You could make the scientist Maori for linguistic continuity, since that’s what Cameron’s folks used as a linguistic base. Facial tattoos would be cool.)

After a while, guided by the trees, the scientists “discover” Na’vi living in the jungle. Maybe before the ship leaves, some of the other scientists covertly “jump ship” by dying and getting reborn as Na’vi. Maybe they have to kill or subvert some of the military types to avoid discovery.

(Actually, of course, the smart thing for the trees to do would be to clone some of these minds into multiple bodies. There’s also no reason the original has to die. But we rarely see narratives where the same person is multiply instantiated, except as a joke.)

When the exploration ship gets back to earth, we see some of the floating tree sprites dispersing, putting down roots, and starting to grow as Earth-like trees. Maybe those trees even catch and reprogram some Earth fauna. So we know a “pod people” scenario (or as I prefer to think a “porkchop tree” scenario) is possible, but we don’t know how it will turn out.

Pandora doesn’t need to send a virus to Earth; its minions could just create one here.

One thing that’s missing in this picture: I’d expect the trees would find ways to create moles in the Earth human population as well. Offhand I don’t see how to factor that in.

As written this lacks drama, but that’s why I’m not a fiction writer. I expect Cameron or someone else with the right skills would find it easy to put real people, dramatic tension, etc. into this framework.

Lying? Stupid? Just untrustworthy

We still find ourselves debating whether an obviously false statement is due to lying or stupidity. I hoped this question would become less relevant with the end of the Bush administration but I was over-optimistic, as the recent health care “debate” has shown.

But trying to make this distinction only helps those uttering the obvious falsehoods. They don’t care about informing us, and have no real interest in what’s true or false. Their statements fit Harry Frankfurt’s definition of bullshit.

So let’s just call these people “untrustworthy”. It doesn’t matter if they are lying or stupid. It doesn’t matter why they say these things. We can’t trust them to guide us or inform us. We should pay them as little attention as possible.

Untrustworthy — and unworthy of our regard.

    Two kinds of technology

    I’ve had another thought about the backstory implicit in Avatar (see my previous post). Probably not that profound but it seems worth mentioning.

    Humans are tool creators and tool users; our technology consists of tools that have grown bigger and more powerful, becoming weapons, vehicles, computers, prostheses, etc. That is the human technology we see in Avatar.

    Tools are relatively simple (compared with our bodies and minds). Early tools were passive, requiring energy from us or domestic animals. We started making wide use of non-living power only in the last couple of hundred years, and only began to build tools that can regulate themselves in significant ways in the last few decades.

    But under different circumstances maybe we would have developed biological technology that wasn’t primarily mediated by tools. (We have a lot of this already, such as domestication of animals, brewing, folk medicine, yoga, etc., but it is secondary to our tool use.) If we had figured out how to build a complete suite of biotechnology this way, we might not use tools very much at all.

    In this case we’d be almost entirely working with living things including our own bodies. These are active, very complex, and are intrinsically self-regulating.

    To create a primarily biological technology, we’d have to learn how to manage evolution, emergent behavior, symbiosis, etc. We’d have to work with the intrinsic self-regulating tendencies of living things, rather than just shaping passive material. We’d develop a very different set of values, assumptions, skills, and probably cultural patterns.

    Arguably imagining Pandora and the Na’vi as having developed through primarily biological technology is the best way to understand the world we see in Avatar.

    Avatar and the posthuman future

    Avatar is the best and most elaborate advertisement ever created for posthumanism.

    The posthuman message of Avatar is easy to miss because Cameron invents a new form of posthuman: the Na’vi, apparently primitive children of Eywa (the “world spirit”). Nonetheless, the conclusion is unavoidable. Compared to the Na’vi, humans are small, weak, ugly, inept, and morally deficient. Avatar’s protagonist is crippled as a human but athletic as a Na’vi. The whole narrative drive of the film is to transcend the human body, the human condition, and human society, and to transition to a more perfect world, albeit a world that is very much material, embodied, and shaped and maintained by its inhabitants. And humans make the transition to the posthuman by “uploading” their minds into Na’vi bodies, a bog-standard posthuman trope.

    Admittedly, the Na’vi as tribal posthumans don’t fit into typical narratives of posthumanism. Both posthuman critical theory (e.g. Donna Haraway) and typical posthuman science fiction emphasize hard-edged scenarios such as cyborgs (the origin of the Star Trek borg, pretty much the opposite of the Na’vi), uploading our minds into computers and robots, etc. Furthermore, the “noble savage” aspect of the Na’vi seems to be the weakest, most clichéd aspect of the movie.

    But arguably if we believe the Na’vi are “noble savages” we are underestimating Cameron, or at least underestimating the potential of the world he has created.

    Taking the Na’vi and Pandora at face value implies accepting a lot of incoherence. Many features of Pandora make no sense if we assume the ecosystem “just grew”. There’s no evolutionary reason for “horses” and “dragons” to plug into the nervous systems of “people”. There’s no evolutionary reason for all the trees to wire themselves together into a giant brain. And so forth.

    But we can make sense of Pandora if we grant that Na’vi biotech is extremely advanced. Suppose the entire Pandoran ecosystem evolved normally, until the (precursors of the) Na’vi got to the point where they had to (or wanted to) manage their entire planet (the very point we on earth are reaching now). They took the path of adapting themselves and their ecosystem to become fully self-managing, and then could “relax” into a more or less tribal culture.

    If the Na’vi engineered a self-managing planet, such techniques as networking the trees into a planetary brain, and providing ways to plug their minds into the nervous systems of plants and animals are sensible engineering solutions. The Na’vi transcend death by having their memories absorbed by the trees. This fits the goal of a sustainable system much better than making each individual biologically immortal (though potentially a society could do both).

    The giant Na’vi “mind meld” with the trees, forming a powerful system capable of identity transfer between bodies, also makes sense within this account. No mystical trappings are required.

    The Na’vi have no routine need for a process of identity transfer between bodies, much less between human and Na’vi bodies. They must have created the process to deal with the current situation, which implies that the Na’vi didn’t simply regress to a tribal culture. They retain the necessary knowledge (presumably stored in the trees) and the ability to integrate with the tree network deeply enough to rework nature, bodies, etc. when needed. Again, that’s a good engineering solution to living in a simpler culture while still having high technology available when needed.

    Unfortunately there’s one fact about Pandora that still doesn’t fit our account: the Na’vi have bodies closely analogous to human bodies (except for their tails), while all other animals on Pandora have six legs and breathe through holes in their necks. Obviously the Na’vi similarity to humans is required to engage the audience and to use motion capture, so we could just ignore it. But let’s treat it as a meaningful discrepancy and see where that leads.

    The surprising analogy between Na’vi and human suggests a plausible extension of our backstory. Perhaps the Na’vi were reshaped based on genetic samples and observations of the first human explorers, to serve as a “honey pot” that attracts human attention and interaction. In that case the Na’vi similarity to humans is another excellent design choice.

    If that is the “real” story, the honeypot strategy has obviously worked well (in both the system-security and seductive senses). The corporations and military commanders are totally sucked into the honeypot and are paying no attention to the real nature of their opponent. Only a few no-account scientists have even noticed that the trees are important.

    On the other hand, perhaps we should not think of the Na’vi as the dominant species on Pandora. The trees may in fact run the show (after all, they are more or less worshiped by the Na’vi). Perhaps the trees have already snuck stealth versions to Earth, and are now in the process of slowly taking over our planet as well (see James Schmitz’s “The Porkchop Tree”). If the trees are in charge, perhaps we don’t have such a happy ending, but otherwise nothing fundamental changes. This account is more similar to posthuman scenarios in which intelligent machines are in charge but host humans or posthumans for inscrutable reasons. Such stories are often dystopian, but they can be quite positive (as in Iain Banks’s Culture novels).

    All of these accounts involve a very high level of bio-technology and a sophisticated approach to managing the whole planet. In any coherent account the “noble savage” schtick is just a cover story. Cameron’s vision implies a pervasively posthuman world.

    I don’t, of course, know if Cameron would endorse any of these accounts. But his vision of Pandora is more feasible and consistent than it at first appears, and adds an important dimension to imagining posthuman futures.

    Analyzing crazy beliefs

    Recently there’s been a renewed attempt in the liberal / scientific blogosphere to figure out what’s up with all the crazy social / political claims that keep erupting about creationism, Obama, health care, global warming, etc. A new and, I think, potentially major step forward in this analysis has just been posted by Mike the Mad Biologist, building on two excellent posts by Slacktivist (False witnesses, False witnesses 2). This analysis is the first one I’ve seen that makes me feel like I understand most of the craziness we are seeing today (and have seen in some form for many decades), and gives me at least a hope of figuring out how it will evolve, how we should respond, etc.

    The basic point is that the crazy stories (death panels, global warming conspiracies, Obama’s birth, etc. etc.) aren’t really “believed” as we understand that term, at least not by their most vigorous proponents. We use “belief” to mean ideas that are part of an overall picture that we intend to be coherent, to help guide our actions in the world (including in the lab if we are scientists), etc.

    Instead, these crazy “beliefs” are really a way of recruiting emotional and social support, declaring membership in a group, etc. So “believers” can’t be persuaded that the “beliefs” are “wrong” just because they are incoherent, lead to obviously wrong conclusions that the “believers” won’t adopt, etc. A strenuous attempt to persuade believers on pragmatic grounds just confirms you are not one of their crowd, can’t be recruited, and are probably one of the enemy. The post “False witnesses” referenced above has a very good discussion of this in some detail. It is worth reading because it is hard to imagine this state of mind (at least I find it hard) until you see it laid out in very specific terms.

    I don’t want to say the crazy stories aren’t “really” beliefs (though I’m not sure saying they are crazy beliefs is any nicer). Instead, let’s call the first kind of belief (aiming at coherence and effectiveness) “pragmatic”, and the second kind (aiming at recruiting or maintaining support) “participatory”. (I’m sure there are harmless and even charming participatory beliefs, as well as these crazy ones.) Realistically we all have both kinds; the question is which kind is dominant in any given area, how we react when they are challenged, etc.

    Properties of pragmatic vs. participatory beliefs
    Slacktivist usefully summarizes his expectations and how he found these extreme participatory beliefs actually work:
    I was operating under a set of false assumptions [viewing these as pragmatic beliefs]. Among these:
    1. I assumed that the people who claimed to believe [a particular crazy story] really did believe such a thing.
    2. I assumed that they were passing on this rumor in good faith — that they were misinforming others only because they had, themselves, been misinformed.
    3. I assumed that they would respect, or care about, or at least be willing to consider, the actual facts of the matter.
    4. Because the people spreading this rumor claimed to be horrified/angry about its allegations, I assumed that they would be happy/relieved to learn that these allegations were, indisputably, not true.

    All of those assumptions proved to be false. All of them. This was at first bewildering, then disappointing, and then, the more I thought about it, appalling — so appalling that I was reluctant to accept that it could really be the case.

    But it is the case. Let’s go through that list again. The following are all true of the people spreading the [crazy story]:

    1. They didn’t really believe it themselves [using the “pragmatic” definition of belief].
    2. They were passing it along with the intent of misinforming others. Deliberately.
    3. They did not respect, or care about, the actual facts of the matter, except to the extent that they viewed such facts with hostility.
    4. Being told that the Bad Thing they were purportedly upset about wasn’t real only made them more upset. Proof that the [crazy story wasn’t true] made them defensive and very, very angry.

    Rather than saying the people he was talking to “didn’t really believe it themselves” and intended to misinform others, I’d say that they didn’t care about the pragmatic dimension at all, and so didn’t consider their recruiting to be misinformation. Quite possibly they didn’t expect those they were trying to recruit to interpret the rumor as a pragmatic fact.

    This analysis has a lot going for it, much of it discussed rather well in these posts. Obviously participatory disagreements will be more like turf wars than practical discussions. As Mike says in the post linked above, “part of the reason [for global warming denialism] is the ever-present desire to punch a hippie in the face”, but he thinks that is a different issue. No, it is the same issue: hippies are cultural icons who stand for a different set of participatory beliefs, incompatible with the main crazy participatory beliefs. (Obviously for this analysis it doesn’t matter if hippies really do have those beliefs, or if hippies even exist.)

    The members of the tribes that tell these crazy stories fear they can’t recruit hippies (and in fact fear that hippies are dangerously capable of seducing their own weakly committed members). Punching them in the face is their sincerest form of acknowledgement.

    I think this analysis is a good guide to anticipating likely future scenarios, and to judging the effectiveness of possible actions. The worst scenarios are very bad, and while not highly likely I think they are credible. The 20th century leaves us with many examples of participatory “cults” that generated massive death, suffering, and social destruction (military cultures in Europe and Japan, Nazis, Soviet Communists, Maoists, Khmer Rouge, and so forth).

    What’s the role of religion?
    None of these posts focus on religion per se (though the crazy beliefs they talk about are especially relevant to evangelicals). And certainly some major participatory cults have been very hostile to religion (e.g. the Khmer Rouge, Maoism, etc.); I suppose they viewed it as competition.

    However I think just about all organized religion depends on participatory beliefs (some forms of Buddhism may be exceptions). Even if believers are otherwise rational, their religion says it is OK for them to hold beliefs that are basically incoherent (or carefully not evaluated for consistency), that aren’t effective in guiding action (or aren’t evaluated in terms of effectiveness), etc. Evangelical religions, furthermore, are defined by recruiting others to their participatory beliefs: that’s what evangelism is.

    One of our constraints is that liberals have a participatory belief (or meta-belief) in pragmatic belief. We want to debate at the level of pragmatic beliefs (what is coherent, what will work) and avoid tribalism. Thus liberals can seem weak when they are attacked in social turf wars at the level of participatory belief. I guess this liberal participatory belief is partly historical, in that the liberal coalition (meta-tribe) was largely founded on the rejection of religious wars and the valorization of pragmatic choices over participatory beliefs, and partly structural, in that the liberal coalition still depends largely on uniting groups with partially incompatible participatory beliefs (liberal Protestants, liberal Catholics, liberal Jews, the liberal non-religious, liberal Muslims, etc.).

    In any case, we don’t want to respond to tribal attacks by organizing tribal counter-attacks; that just tends to pull everything down to the tribal level and would make our problems a lot worse. So as an initial response, the liberal coalition’s rejection of participatory, tribal responses is correct. However, we can’t respond only with pragmatic arguments, because those don’t work against participatory attacks. We have to actually take on the participatory attacks and defeat them; we just have to find ways of doing it that are better than fighting them on their own participatory terms.

    Designing notes

    I just spent a while adding a feature to my blog, or from another point of view removing an annoyance from my writing.

    I often found myself experiencing a conflict between (1) leaving out explanations or details likely to be useful or entertaining to some readers, and (2) dumping content that is too extensive, too basic, too fancy, or whatever on other readers. Readers differ enough that there’s never really a “right” level, and I found the resulting compromises uncomfortable. Most likely other writers of long-form posts experience the same conflict.

    My response is basically simple. I added notes that open in place when the note reference is clicked.

    Like this
    The note looks like this when it is open, and another click will close it.

    Why do something new?

    Dead-tree media have a number of techniques for dealing with optional content. Discursive footnotes, sidebars, flagged sections, etc. are used for content that may be too detailed or too basic for the main line. But these methods don’t work very well on the web. Footnotes in particular, the only one of these methods supported in common authoring environments, are terrible on the web: reading a footnote typically involves scrolling far away (so you lose all your visual context) and then scrolling back again to continue reading (so you can no longer refer to the footnote).

    On the other hand, we can show and hide content in web pages quite easily, since the advent of DOM twiddling libraries.

    What’s a DOM?
    Modern browsers provide a standard way of manipulating the content of a web page. The content of the page is represented as a set of “objects” that contain text, images, formatting information, etc. and this whole set of mechanisms is called the “Document Object Model” or DOM. Scripts embedded in the page (typically written in Javascript) are triggered by your actions and can basically change anything in the page. (For security, they can’t touch anything on your machine outside the page.) Modern web apps, like GMail, Google Maps, etc. are largely built out of scripts that manipulate the DOM.
    So on the web it seems more appropriate to display optional content in context, only when requested.
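    To make this concrete, here is a minimal sketch of the kind of DOM manipulation a script can do (the element id is a hypothetical example, not from this site’s actual markup):

```javascript
// Hypothetical sketch: a script finds an element in the page's DOM
// and changes its content and visibility. Runs in a browser.
var note = document.getElementById("note-1"); // "note-1" is made up
note.textContent = "This text was inserted by a script.";
note.style.display = "none"; // hide it without removing it from the page
```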

    The concept is simple. I looked around fairly carefully for some existing way to do this that I could adopt in my (WordPress) environment. There doesn’t appear to be anything available (I’d still be happy to use an existing solution so let me know if I missed one). There are plenty of ways to support footnotes at the bottom of the page, but no way to handle notes that open and close in context, and that integrate cleanly with the rest of the page. So, I went to the trouble of implementing my own.

    The basic mechanism is also simple. Unfortunately, as could be expected with the web being a steaming pile of half-integrated standards and one-quarter-documented software, the actual execution was not so simple and involved a couple of significant compromises. However, in the end I hacked my way through it and have something that makes me fairly happy. The result is visible in the previous post and in this one.

    What’s the design?

    I pretty much stuck with notes (footnotes, endnotes, etc.) as we understand them: a short reference (typically superscript), taking you to an arbitrary chunk of content that can contain basically anything. References are arbitrary text; they could just be numerals, letters, special characters, or whatever. They could all be the same — for example just asterisks. I prefer to use descriptive references so the reader doesn’t have to open a note to figure out whether they want to read it.

    To simplify the design, a reference takes you to the immediately following note. That way I don’t need any links, anchors or identifiers. Notes open in place just after the reference, but they could be floated left or right (looking more like sidebars).
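    As a sketch of what the authoring markup might look like (the element names noteref and note are my guesses; the post doesn’t name the actual custom elements):

```html
<!-- Hypothetical authoring markup: each reference is immediately
     followed by its note, so no ids, anchors, or links are needed. -->
Some main text with a reference.<noteref>Like this</noteref>
<note>The note content, shown in place when the reference is clicked.</note>
```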

    A number of pleasant properties emerged from the design.

    • The appearance of references and notes is determined entirely by normal CSS.
      What’s CSS?
      Modern web content is marked up to indicate the intended usage, and then styles are defined to say how it should look. Styles can be applied with a clever mechanism called “Cascading Style Sheets”, whose rules basically say “Make everything that’s marked this way look like this”. There are very powerful ways to describe which things to style and how they should look.
      Even pulling notes out of the text and making them into marginal notes could be done by styling them differently.
    • Because we’re on the web, notes can contain a very wide range of content; for example, the steps of a tricky recipe could be annotated with embedded video, available for those who need it and out of the way for those who don’t.
    • The open and closed state of notes is persistent, so the user can pretty much tailor the text to suit themselves. For example in the case of the annotated recipe, some illustrations could be open and others could be closed as desired.
    • Notes can be nested to any depth. With my current styles the depth is indicated by the indentation. So far I haven’t actually needed to nest notes…
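    A sketch of how such styling might look (the class names are my invention; the actual stylesheet isn’t shown in the post):

```css
/* Hypothetical styles: closed notes take up no space at all. */
.noteref { vertical-align: super; font-size: smaller; cursor: pointer; }
.note { display: none; }
.note.open { display: block; margin-left: 1.5em; } /* indent shows nesting depth */

/* Restyling alone could instead float open notes like sidebars: */
/* .note.open { float: right; width: 30%; } */
```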

    What’s the mechanism and what are the compromises?

    I just defined custom elements for note references and notes, used jQuery to translate them into appropriately styled spans and divs respectively, and also used jQuery to add click handlers to open and close the notes.

    What does all this mean?
    jQuery is one of the “DOM twiddling” libraries I mentioned above, written in Javascript. Beyond that, I’m sorry to say, I don’t have the time or energy to explain this; realistically you need to have a basic knowledge of HTML, the DOM, and CSS to understand things from here on.
    jQuery was an excellent way to express this — very terse, but declarative and clear, aside from the clutter introduced by all the anonymous function syntax (unavoidable due to Javascript).

    All that took a few hours, from the time I decided to do it (and basically I didn’t know jQuery or Javascript). The core notes implementation should work in just about any environment that can load jQuery.
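    A minimal sketch of what that jQuery might look like (the element and class names are my assumptions, not the blog’s actual code):

```javascript
// Hypothetical sketch: translate custom elements into styled
// spans/divs, then attach click handlers to toggle the notes.
jQuery(function ($) {
  // Each <noteref> becomes a clickable span...
  $("noteref").each(function () {
    $(this).replaceWith(
      $("<span class='noteref'>").append($(this).contents()));
  });
  // ...and each <note> becomes a div, hidden by default via CSS.
  $("note").each(function () {
    $(this).replaceWith(
      $("<div class='note'>").append($(this).contents()));
  });
  // A click on a reference toggles the next following note,
  // so no ids, anchors, or explicit links are needed.
  $(".noteref").click(function () {
    $(this).nextAll(".note").first().toggleClass("open");
  });
});
```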

    Then the “fun” began.

    the “p” tag is evil

    I had problems getting my notes to behave nicely; they kept introducing line breaks even when closed (and styled with display:none). Eventually I figured out that notes are incompatible with “p” tags, basically because of historical constraints. I’ll explain the specifics in another post so possibly other people trying to figure this out will find the answer. So in posts where I use notes, I have to use appropriately styled divs rather than “p” tags.
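    The underlying problem, as I understand it: HTML doesn’t allow block-level elements like div inside “p”, so the browser’s parser auto-closes the paragraph when it reaches the note’s div, splitting the text even when the div itself is hidden. Roughly:

```html
<!-- What gets written: -->
<p>Some text <div class="note">hidden note</div> more text</p>

<!-- What the parser actually builds (approximately): the paragraph
     is closed early, so the text is split around the note. -->
<p>Some text </p><div class="note">hidden note</div> more text
```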

    and feeds aren’t styled!

    Now the rendering of a post depends heavily on the styles. But the posts in my feed didn’t have the style sheet, and feed readers strip all embedded style information. With unstyled divs rather than “p” tags, the feed version of the post was nearly unreadable.

    So I had to figure out how to generate “p”s rather than divs when the post content was written out to the feed. WordPress has filter hooks for this kind of thing, but it turned out it doesn’t have a hook to process only content being written to feeds, so I had to find the right place to hack in another filter hook. (I may also explain this in a post; I’ve requested that WordPress add a standard hook for this.)

    how can I make notes look OK in feeds?

    Also, as unstyled divs (or “p”s), the notes couldn’t be distinguished from the main text. The only way to control the rendering of content in feeds is by choice of elements. So when generating a feed, I needed to either delete the notes or translate them into elements that would “look like notes” in a feed reader. After some thought I realized I could use definition lists, which look quite “note like” in typical feed readers, and even nest well.

    What does a definition list look like?
    I didn’t know either until I went looking. Like this:
    Note 1
    Some note content.
    Note 2
    Some more note content.

    Perhaps going on somewhat longer.
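    The markup generated for the feed is an ordinary HTML definition list, with the note reference as the term and the note content as the definition:

```html
<dl>
  <dt>Note 1</dt>
  <dd>Some note content.</dd>
  <dt>Note 2</dt>
  <dd>Some more note content. Perhaps going on somewhat longer.</dd>
</dl>
```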

    Lots of leftovers

    Now I basically have a blog with notes that work the way I want, within the limits of current web standards and software. But I still have a lot of cleanup before I feel done.

    • I hacked an existing text-replacement plugin for WordPress to use my hacked-in filter hook. I should generalize my hack and release the plugin.
    • The jQuery script does two searches when it should just do one; I need to figure out the right jQuery idiom.
    • Authoring now requires manually generating the note markup. This is pretty easy but I should probably write some authoring macros.
    • Content isn’t properly styled when printing and perhaps in other cases; I need to fix that, probably with more WordPress filtering.
    • For extra credit, printing should reflect the customized state of the page (which notes are open or closed); this is likely to be hard, but maybe I can figure out how by deciphering some apps that do it, like Google Maps.
    • I need to figure out how to get the filter hook adopted by standard WordPress so I don’t have to maintain a forked version.
