Leaving knowledge on the table

Yesterday I had a very interesting conversation with an epidemiologist while I was buying a cup of coffee (it’s great to live in a university town).

She confirmed a dark suspicion I’ve had for some time — large population studies do a terrible job of extracting knowledge from their data. They use basic statistical methods, constrained by the traditions of the discipline, and by peer review that has an extremely narrow and wasteful view of what counts as a valid statistical tool. She also said that even if they had the freedom to use other methods, they don’t know how to find people who understand better tools and can still talk their language.

The sophisticated modeling methods that have been developed in fields like statistical learning aren’t being applied (as far as either of us knows) to the very large, rich, expensive, and extremely important datasets collected by these large population studies. As a result, we both suspect a lot of important knowledge remains locked up in the data.

For example, her datasets include information about family relationships between subjects, so the right kind of analysis could potentially show how specific aspects of diet interact with different genotypes. But the tools they are using can’t do that.
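As a minimal sketch of the kind of analysis the post has in mind: detecting a diet–genotype interaction amounts to fitting a model with an explicit interaction term. The variable names and effect sizes below are entirely hypothetical, and the data is synthetic; a real cohort analysis would also have to model family structure, which this toy example ignores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic cohort: a binary genotype marker and a continuous diet measure.
genotype = rng.integers(0, 2, size=n)    # carrier / non-carrier (hypothetical)
diet = rng.normal(0.0, 1.0, size=n)      # e.g. standardized fat intake

# In this simulation, diet affects the outcome *only* in carriers
# (true interaction effect = 0.8).
outcome = 0.2 * genotype + 0.8 * genotype * diet + rng.normal(0.0, 1.0, size=n)

# Design matrix with an explicit diet x genotype interaction column.
X = np.column_stack([np.ones(n), genotype, diet, genotype * diet])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"estimated interaction coefficient: {beta[3]:.2f}")  # close to 0.8
```

A tool that only fits main effects would average the carrier and non-carrier groups together and report a diluted diet effect; the interaction column is what lets the model separate them.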

We’d all be a lot better off if some combinations of funding agencies and researchers could bridge this gap.

Netminds: present and future

Shel Kaphan commented on “Dancing toward the singularity”:

Personally, I am comfortable with the idea of participating in… groupings that may involve machines and other humans and have their own entity-hood, and I’m comfortable with the idea that my brain probably has already adapted and become dependent on substantial immersion in computing environments and other technology, and I know what it is like to be part of an early generation, fairly tightly coupled, computing-enhanced group with a focus. I’m just saying the name “hybrid system” as such doesn’t sound either desirable or healthy.

And of course he’s right: who wants to be “hybridized” or “part of a hybrid system”? Ugh, terrible marketing.

So from now on, I’ll call them “netminds”: groups of people and machines working together so closely that they form a thinking entity.

People who become part of a netmind don’t lose their own identity, but they adapt. A moderately good analogy is a dance troupe, a repertory theater company or a band. Each individual retains their own identity but they also adapt to the group. At the same time, the troupe or band selects people who fit its identity (maybe unconsciously). And over time the group identity, the set of members and to some extent the individuals (and brains) co-evolve. So the individual and group identities are in a complex interplay.

This interplay will get much more intense as humans and machines get more tightly coupled. The tightest groups could be much closer than any today, with individuals interacting through machine interpretation of details of muscle tension, micro-gestures, brain state, etc. etc. In such a group people would be “inside each other’s heads” and would need to give up most personal boundaries between group members. The boundary would fall between the netmind and the outside world.

The fullest exploration of such a merger (without machines) is Vernor Vinge’s Tines in A Fire Upon the Deep. But even Vinge can only sustain the dual point of view (individual and netmind) in places, and elsewhere falls back into treating the netmind as a monolithic entity. This may be necessary in a narrative that appeals to normal humans. Joan Vinge explores the emotional side of netminds in Catspaw.

Netminds today

Does it make sense to talk about netminds as existing today? I think it does, although today’s netminds are relatively weakly coupled.

Gelled development teams, working closely together in a shared online environment, are netminds. The level of coupling we can attain through a keyboard is pathetically low, but as anyone who has been part of such a team can attest, the experience is intense and the sense that one is part of a larger entity is strong.

Quite likely a guild in an online game is a netmind, especially when they are engaged in a raid. I don’t personally have any experience with this, but since it is a more or less real-time experience, it probably has some interesting attributes that are mostly lacking in the software development case.

At the other end of the spectrum, we might want to call some very large, diffuse systems netminds. An appealing example is the Wikipedia editors plus the Wikipedia servers (note that I’m not including readers who don’t contribute). Here the coupling is fairly weak, but arguably the resulting system is still a thinking entity. It forms opinions, makes decisions (albeit with internal conflicts), gets distracted, etc. We can also see the dynamics that I describe above: individuals adapt, some individuals are expelled, the netmind develops new processes to maintain its integrity, and so forth. Human groups without network support do the same kinds of things, but a non-networked group the size of Wikipedia would “think” hundreds or thousands of times more slowly, and probably couldn’t even remain a coherent entity.

I suppose we could even call the whole web plus the Google servers a netmind, in the weakest possible sense. (Probably it is only fair to include all the other search and ranking systems as well.) Because the coupling is so weak, the effect on individual identity is minimal, but people (and certainly websites) do adapt to Google, and Google does exclude websites that participate in (what it considers) inappropriate ways. Furthermore Google works fairly hard to retain its integrity in the face of challenges from link farms, click fraud, etc. But this case is so large and diffuse that it stretches my intuition about netminds past its limits.

Netminds tomorrow

Let’s return to the more tightly coupled cases. Humans seem to naturally get caught up in intense group activities. Usually immersion in the group identity is fleeting — think of rock concerts, sports events, and riots. But intense creative group activity can generate a prolonged emotional high. Many physicists who worked on development of the atomic bomb at Los Alamos remembered it as the peak experience of their lives. Engineers can get almost addicted to intense team development.

We already have the technology to make gaming environments fairly addictive, even without intense human interaction; there’s a reason Everquest is called Evercrack.

It’s easy to imagine that tightly coupled netminds could exert a very powerful emotional hold over their participants. Netminds will tend to move in the direction of tighter, more intense bonds on their own, since they feel so good. As our technology for coupling individuals into netminds gets better, we’ll have to be careful to manage this tendency, and there are certain to be some major, highly publicized failures.

A related problem is exemplified by cults. Cults don’t provide the emotional high of intense creative effort; they seem to retain people by increasing their dependency and fear of the outside world. Probably technology for tight coupling could be exploited to produce cult-like bonds. Net cults are likely to be created by exploitative people, rather than arising spontaneously, and such phenomena as cult-like info-sweatshops are disturbingly likely — in fact they arguably already exist in some online games.

Whether creative or cult-like, tightly coupled netminds are also likely to shape their participants’ brains quite strongly. The persistent personality changes in cult members, long-term hostages, etc. are probably due to corresponding changes in their brains — typically reversible, but only with difficulty. Participants in tightly coupled creative groups probably undergo brain changes just as large, but these changes tend to enhance individuals rather than disable them, so they produce less concern. Nobody tries to deprogram graduate students who are too involved with their lab.

We already know enough to build netminds that would deliberately induce changes in participants’ brains. We’re already building systems that produce outcomes similar to very disciplined practice. But tightly coupled systems could probably go far beyond this, reshaping brains in ways that ordinary practice could never achieve. As with most of these scenarios, such reshaping could have major beneficial or even therapeutic effects or could go horribly wrong.

Capitalists vs. Entrepreneurs

This post was catalyzed by a post at The Technology Liberation Front. Thanks to those who debated me in those comments, you helped to clarify my points.

I was responding to this key point:

[P]eer production isn’t an assault on the principles of a free society, but an extension of those principles to aspects of human life that don’t directly involve money. ….

[A] lot of the intellectual tools that libertarians use to analyze markets apply equally well to other, non-monetary forms of decentralized coordination. It’s a shame that some libertarians see open source software, Wikipedia, and other peer-produced wealth as a threat to the free market rather than a natural complement.

Since peer production is an entirely voluntary activity it seems strange to view it as a threat to the free market. (My interlocutors in the comments demonstrated that this view of peer production is alive and well, at least in some minds.) So how could this opinion arise? And does it indicate some deeper issue?

I think viewing peer production as a threat is a symptom of an underlying issue with huge long-term consequences: In peer production, the interests of capitalists and entrepreneurs are no longer aligned.

I want to explore this in considerably more detail, but first let’s get rid of a distracting side issue.

It’s not about whether people can make money

The discussion in the original post got dragged into a debate about whether people contribute to open source software (and presumably peer production in general) because it “is a business” for them. This belief is easy to rebut with data.

But this is a side issue. I’m not arguing that people can’t increase their incomes directly or indirectly by participating in peer production. Sometimes of course they can. My point is that the incentives of entrepreneurs (whether they work for free, get consulting fees, or go public and become billionaires) and capitalists (who want to get a return on something they own) diverge in situations that are mainly coordinated through non-monetary incentives.

Examples and definitions

Let’s try to follow out this distinction between entrepreneurs and capitalists.

For example, Linus Torvalds is a great entrepreneur, and his management of the Linux community has been a key factor in the success of Linux. Success to an entrepreneur is coordinating social activity to create a new, self-sustaining social process. Entrepreneurship is essential to peer production, and successful entrepreneurs become “rock stars” in the peer production world.

A capitalist, by contrast, wants to get a return on something they own, such as money, a domain name, a patent, or a catalog of copyrighted works. A pure capitalist wants to maximize their return while minimizing the complexity of their actual business; in a pure capitalist scenario, coordination, production, and thus entrepreneurship are overhead. Ideally, as a pure capitalist you just get income on an asset without having to manage a business.

The problem for capitalists in peer production is that typically there is no way to get a return on ownership. Linus Torvalds doesn’t own the Linux source code, Jimmy Wales doesn’t own the text of Wikipedia, etc. These are not incidental facts; they are at the core of the social phenomenon of peer production. A capitalist may benefit indirectly, for a while, from peer production, but the whole trend of the process is against returns on ownership per se.

Profit

Historically, entrepreneurship is associated with creating a profitable enterprise. In peer production, the idea of profit also splits into two concepts that are fairly independent, and are sometimes opposed to each other.

The classical idea of profit is monetary and is closely associated with the rate of (monetary) return on assets. This is obviously very much aligned with capitalist incentives. Entrepreneurs operating within this scenario create something valuable (typically a new business), own at least a large share of it, and profit from their return on the business as an asset.

The peer production equivalent of profit is creating a self-sustaining social entity that delivers value to participants. Typically the means are the same as those used by any classical entrepreneur: creating a product, publicizing the product, recruiting contributors, acquiring resources, generating support from larger organizations (legal, political, and sometimes financial), etc.

Before widespread peer production, the entrepreneur’s and capitalist’s definitions of success were typically congruent, because growing a business required capital, and gaining access to capital required providing a competitive return. So classical profit was usually required to build a self-sustaining business entity.

The change that enables widespread peer production is that today, an entity can become self-sustaining, and even grow explosively, with very small amounts of capital. As a result it doesn’t need to trade ownership for capital, and so it doesn’t need to provide any return on investment.

As others have noted, peer production is not new. The people who created educational institutions, social movements, scientific societies, etc. in the past were often entrepreneurs in the sense that I’m using here, and in their case as well, the definition of success was to create a self-sustaining entity, even though it often had no owners, and usually produced no “profit” in the classical sense.

These concepts of “profitability” can become opposed when obligations to provide classical profits to investors prevent an entity from becoming self-sustaining. In my experience, many startups die because of the barriers to participation that they create while trying to generate revenue. Of course if they are venture funded, they typically are compelled to do this by their investors. Unfortunately I don’t know of any way to get hard numbers on this phenomenon.

Conversely, there are examples where a dying business becomes a successful peer-production entity. The transformation of Netscape’s dying browser business into the successful Mozilla open source project is perhaps the clearest case. Note that while Netscape could not make enough profit from its browser to satisfy its owners, the Mozilla Foundation is able to generate more than enough income to sustain its work and even fund other projects. However this income could not make Mozilla a (classically) profitable business, because it wouldn’t come close to paying for all the contributions made by volunteers and other companies.

Current pathologies of capitalism

The conflicting incentives of entrepreneurs and capitalists come into sharp focus around questions of “intellectual property”. One commenter complained about open source advocates’ attacks on “software patents, … the DMCA and … IP firms”. These are all great examples of the divergence between ownership and entrepreneurship.

The DMCA was drafted and lobbied into existence by companies who wanted the government to help them extract money from consumers, with essentially no innovation on their part, and probably negative net social value. In almost every case, the DMCA advocates are not the people who created the copyrighted works that generate the revenue; instead they own the distribution systems that got those works to consumers, and they want to control any future distribution networks.

The DMCA hurts people who want to create new, more efficient modes of distribution, new artistic genres, new delivery devices, etc. In general it hurts entrepreneurs. However it helps some copyright owners get a return on their assets.

The consequences of patents and other IP protection are more mixed, but in many cases they inhibit innovation and entrepreneurship. Certainly patent trolls are an extremely clear example of the conflict — they buy patents not to produce anything, but to sue others who do produce something. Submarine patents (like the claimed patents on MP3 that just surfaced) are another example—a patent owner waits until a technology has been widely adopted (due to the work of others) and then asserts the right to skim revenue from ongoing use.

Intellectual property fragmentation is also a big problem. In many domains, especially biomedical, valuable innovations potentially require the right to practice dozens or even hundreds of patents, held by many different entities. Entrepreneurs often can’t get a new idea to market because the owners of these patents can’t all be brought to an agreement. Each owner has a perverse incentive to be the last to agree, so they can get any “excess” value. Owners also often overestimate the potential returns, and demand a higher “rent” than can actually be sustained. This phenomenon is called the “tragedy of the anti-commons”.
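The anti-commons arithmetic is easy to make concrete. The numbers below are purely illustrative, not drawn from any real licensing negotiation: when each of many holders independently overestimates its share, the stacked royalty demands can exceed the total value of the product, and the project never happens.

```python
# Toy illustration of the "anti-commons": each patent holder independently
# demands a royalty, and the stacked demands exceed the product's value.
# All figures are hypothetical.

product_value = 100.0      # value the entrepreneur could create
holders = 20               # independent patent owners
demand_per_holder = 8.0    # each overestimates its own contribution

total_royalties = holders * demand_per_holder
viable = total_royalties < product_value

print(f"total royalty demands: {total_royalties:.0f}")  # 160
print(f"project goes forward: {viable}")                # False
```

Note that no single holder's demand is unreasonable on its own (8% of the value); the failure is a coordination problem, which is exactly why each owner's incentive to hold out for the "excess" value is so destructive.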

All of these issues, and other similar ones, make it harder for small companies, individuals and peer production projects to contribute innovation and entrepreneurship. Large companies with lawyers, lobbyists, and defensive patent portfolios can fight their way through the thickets of “intellectual property”. Small entrepreneurs are limited to clearings where they can hope to avoid IP problems.

Conclusion

Historically many benefits of entrepreneurship have been used to justify capitalism. However, we are beginning to see that in some cases we can have the benefits of a free market and entrepreneurship, while avoiding the social costs imposed by ensuring returns to property owners. The current battles over intellectual property rights are just the beginning of a much larger conflict about how to handle a broad shift from centralized, high capital production to decentralized, low capital production.

Networks of knowledge

My recent metaphysics post touches on a question I’ve been thinking about for some time: How can we judge whether a given domain of inquiry or a theoretical proposal is credible or not? Of course this is a very hard question, but I think we should pay more attention to an aspect of it that can give us at least retrospective insight.

Some domains were once very important, but have completely lost any credibility — for example astrology. Some domains have been losing credibility for a long time but haven’t been completely written off — for example Freudian psychological theory. Some domains seem quite credible but are being vigorously attacked by investigators who are themselves credible — for example string theory. Also, in some cases, proposals that were broadly rejected were later largely adopted, sometimes after many decades — for example continental drift, reborn as plate tectonics.

Philosophers of science, and many epistemologists, mine these historical trajectories for insight into the broader question. There are diverse approaches to explaining the success or failure of various theories and research programs. However I think it is fair to say that the vast majority of these attempts are “internalist”, in the sense that they focus on the internal state of a research program over time. Different approaches focus on formal characteristics of a sequence of theories, social and historical factors, methodological factors, etc. but almost all accounts assume that the answer is to be found within the research program itself.

I’d like to propose a different perspective: We can judge the health of a research program by its interactions with other research programs. As long as a research program is actively using and responding to results from other domains, and as long as other domains are using its results, or working on problems it proposes, it is healthy and will remain credible. If it proves to be incapable of producing results that other domains can use, or even worse, if it stops responding to new ideas and challenges from external research, it is on its way to becoming moribund.

Looking back at the historical trajectories of many research programs, this criterion works quite well. It is not very hard to see why this could be the case. Investigators in any given domain are constantly making practical judgments about where to spend their effort, what ideas proposed by others they should trust, etc. (Kitcher discusses this point in detail in The Advancement of Science.) Investigators who might take advantage of results from a given external domain have a strong incentive to make accurate assessments of whether those results can actually contribute to their work. Furthermore, they have a lot of information about how reliable, relevant, and easy to use a given result is likely to be (compared, for example, with an historian or philosopher). So if a research program isn’t generating useful results, its neighbors will sense that, and will have strong incentives to accurately reflect their judgment in their research practices.
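The criterion above can be operationalized in a crude way: treat "program A used a result from program B" as an edge in a graph, and score each program by its cross-program traffic. The edges and program names below are a made-up toy example, not a serious historical dataset.

```python
# Toy version of the criterion: "program A used a result from program B"
# becomes a directed edge, and each program is scored by its exchange
# with other programs. Names and edges are illustrative only.

uses = [
    ("geology", "physics"),       # e.g. magnetic dating methods
    ("archeology", "physics"),    # radiometric dating
    ("biology", "statistics"),
    ("physics", "mathematics"),
    # "astrology" appears nowhere: it neither uses nor feeds other programs.
]

programs = {"geology", "physics", "archeology", "biology",
            "statistics", "mathematics", "astrology"}

def exchange_score(p):
    """Count links where p either consumes or supplies external results."""
    return sum(1 for a, b in uses if p in (a, b))

moribund = sorted(p for p in programs if exchange_score(p) == 0)
print(moribund)  # ['astrology']
```

A real application would weight edges by recency, since the criterion is about *active* interchange: a program living off exchanges from fifty years ago should score as moribund too.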

However I think the implications are actually much deeper than these obvious (and probably valid) factors. For example, the trajectories of research programs are often dramatically shifted by new techniques that depend on external results. Plate tectonics became dominant through a dramatic shift in opinion in 1966 largely as a result of improved measurements of magnetic orientation in sea floor rocks. Paleontology and archeology have been dramatically affected multiple times by improvements in dating based on physics. Evolutionary biology has been hugely reshaped by tools for analyzing genetic similarity between species. Etc.

Such shifts open up major new interesting questions and opportunities for progress. But they are much less likely to occur in domains that, for whatever reason, are cut off from active interchange with other research programs. Also, some reasons why a domain may be cut off — the desire to protect some theoretical positions, for example — will also tend to cause internal degeneration and ultimately loss of credibility.

More generally, my criterion reflects the fact that all research programs exist within a network of related activities — technical, intellectual, educational, etc. — without which they would wither and die. In essence, I’m advocating taking this network, and its changes over time, more seriously.

This criterion doesn’t engage in any obvious way with the usual question “Are these theories true?” (or at least, becoming more true, if we can figure out what that means). I’m not even sure that I can show that there is a strong connection.

Possibly this indicates that my criterion is fatally flawed. Or possibly it means I should look harder for a connection. But I suspect that this actually means that the idea of “truth” does not work very well at the scale of research programs. If a scientist is reporting experimental results, “truth” may be a very appropriate criterion, especially if we are concerned about fraud or sloppiness. But in these larger issues we should probably try to sharpen our criteria for pragmatic usefulness, and not waste time arguing about truth.

Dancing toward the singularity

Vernor Vinge gave a talk in the Long Now Foundation seminar series last week (which is great, by the way, you should go if you can). Stewart Brand sent out an email summary but it isn’t on the web site yet.

As Brand says, “Vinge began by declaring that he still believes that a Singularity event in the next few decades is the most likely outcome — meaning that self-accelerating technologies will speed up to the point of so profound a transformation that the other side of it is unknowable. And this transformation will be driven by Artificial Intelligences (AIs) that, once they become self-educating and self-empowering, soar beyond human capacity with shocking suddenness.”

At Stewart’s request, Vinge’s talk was about knowable futures – which by definition mean that the singularity doesn’t happen. But the follow-up questions and discussion after the talk were mostly about the singularity.

All of this has crystallized my view of the singularity. The path isn’t all that strange, but I now have a much better sense of the details, and see aspects that haven’t been covered in any essays or science fiction stories I know.

What’s special about Second Life?

Second Life is getting worked over by the folks at Terra Nova, and also the folks at Many2Many, and I’m sure in lots of other places. I’m not a Second Lifer myself (and neither are they, as far as I can tell). Also I’m not interested in most of the issues they’re discussing. But Second Life is clearly an interesting phenomenon, at least for the moment, and so I ask myself: What is SL introducing that is new and will survive, either in SL or through imitation in other virtual worlds? And once we roughly understand that, how easy will it be to duplicate or go beyond the success of Second Life, especially in more open systems?

Based on reading various accounts, and talking to people with a little experience of Second Life, below are my lists of what’s new in SL, what’s integrated in an interesting way, what is typical of MMOs in general, and what emerges in SL more than other MMOs. Quite likely I’ve gotten some of these wrong and missed significant points, so I welcome correction and extension.

  • Unusual or unique in Second Life:
    • Internal economy with micro-payments, plus currency exchange
    • In-game crafting of objects, including 3D modeling and scripting
    • Highly customizable avatars, including movement scripting
    • In-game crafting of avatar customizations, scripts and clothes
    • Client-server support for “dynamic” world model, which requires different mechanisms than the more pre-built worlds of most MMOs
  • Newly integrated in Second Life:
    • Streaming audio for performances
    • Imported graphics, streaming video?
    • Users buy hosting for parts of the environment
  • Similar to other MMO environments:
    • 3D experience with landscape, avatars, buildings, objects, etc.
    • Scripted objects (but letting users script is unusual or unique)
    • Large scale geography
    • Property “ownership”, sales, etc.
    • Chat
    • Social activities and social structures
  • Emergent in Second Life more than other MMOs:
    • Complex user-built landscapes
    • Businesses based on user-built objects
    • Complex economics
    • Complex social organization

Several of the things that are currently unique to Second Life are natural extensions of existing technology, including in-game crafting and scripting, customizable avatars, and the dynamic world model. Crafting and scripting can probably be implemented much better than they are in SL, using existing open source languages and 3D modeling techniques.
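To make "letting users script objects" concrete, here is a minimal sketch of user-scriptable in-world objects, assuming scripts are written in the host language and attached as event handlers. The `WorldObject` API is entirely hypothetical, invented for illustration; it is nothing like Second Life's actual scripting system, only a hint at how an open implementation could expose the same capability.

```python
# Hypothetical sketch: world objects carry user-attached event handlers.
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    handlers: dict = field(default_factory=dict)

    def on(self, event, handler):
        """Let a user attach a script to an event (touch, collide, timer...)."""
        self.handlers[event] = handler

    def fire(self, event, *args):
        """Dispatch an in-world event to the user's script, if any."""
        if event in self.handlers:
            return self.handlers[event](self, *args)

door = WorldObject("door")
door.on("touch", lambda obj, who: f"{obj.name} opened by {who}")
print(door.fire("touch", "avatar42"))  # door opened by avatar42
```

A real system would of course need sandboxing, persistence, and rate limits, but the core design choice — user code attached to world objects and driven by world events — is small enough to fit in a page.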

Other aspects of Second Life, however, depend on a critical mass of motivated users. The economic model, including micro-payments, and the creation of a diverse environment and economy, depend on a fairly large population, investing considerable effort over a reasonable period. This dependence on scale and long term investment will make these aspects of SL hard to duplicate or surpass, especially through relatively fragmented efforts.

Social value, exchange value, and leaving money on the table

Kevin Burton wrote an interesting post which brought this topic into focus for me. I’ll apologize in advance: my post raises some interesting questions, but doesn’t provide much in the way of answers.

Kevin talks about the amount of money CraigsList is leaving on the table, by not running advertising. As it happens, I’ve had a similar conversation with Jimmy Wales, in which he described the issue similarly to his interview here.

Both Wikipedia and CraigsList have the potential to generate *hundreds of millions* of dollars in revenue that they have chosen to forgo.

I would guess there are a lot of other organizations like this — their current income is more than enough to support the services they provide, but they generate far more social value than they capture in revenue, and as a result they could easily generate a lot more revenue without significantly impairing their service.

This revenue forgone could amount to multiple billions across all the relevant services.

So why does Economics 101 not operate here; why do these organizations have the power to generate so much more revenue than they need to operate? Note that this is probably not temporary in at least some important cases — both Wikipedia and CraigsList are likely to continue to become more entrenched and generate more social value, so they will leave more money on the table.

Of course the answer is that network effects dominate in both of these cases, and probably many others. The unusual thing about these cases is that they have decided to forgo the revenue, making the huge gap visible. A lot of other companies with some form of major entrenchment maximize their revenue and then burn it up in waste of one sort or another, or pass it on to their shareholders as dividends.

Not only that, but the usual Econ 101 justification for making lots of money — it helps increase investment in that type of business — is a non-starter. CraigsList and Wikipedia don’t need more investment to grow, and investment by others is unlikely and anyway probably wouldn’t increase social value. The economic signal generated is useless, possibly harmful.

In many of the businesses mentioned above, the economic signal is also misleading or harmful. More investment in one of these powerful businesses often won’t generate more social value.

Furthermore, if any social value is clearly lost in this situation, it is the value to *users* of seeing highly relevant advertisements; in this one respect, the monetary signal is perhaps accurate.

So one question that arises from these cases is how big a market failure we are looking at, and what sort of institutional changes would fix it, or at least improve the situation. Based on my own experience, my guess is that the failure is really huge. But I don’t have institutional proposals at this point.

Further questions are raised by Kevin’s suggestion that Craig take the money and give it to charity (or perhaps social reinvestment). Wales told me that a significant group of Wikipedians wants the same thing. I wouldn’t be hostile to this, but it raises interesting questions. Why should the money not be left to its current owners to spend as they wish, perhaps on charity? In this case, since the money would come from advertising, it would presumably be left with companies, not individuals.

Of course this is an argument we hear for reducing taxes. In this case, it seems like a clearly bad idea; the companies will just end up spending it on less efficient advertising, or dissipating it in other ways.

So this raises another question: When should a big organization take our money (or someone else’s), because it can make better use of it than we could?

Now of course we feel that taxes are “taken” while the money these organizations would get is “freely given”. Or as a libertarian might argue: “The government has no right to take my money by force.” But even that argument is more complicated. Both the government and Wikipedia get our money because they are entrenched and risky to replace. Replacing our current government (not the people, the institution itself) would be messy and would involve a major risk of very costly chaos. That is why governments mostly get the population to go along with them, even if they are not doing a great job.

Wikipedia would be easier to replace than a government, but as Wikipedia gets more entrenched (as an institution and community, not a pile of content) it will be harder and harder to avoid it or replace it, even if it screws up more often than it needs to, or develops some kind of persistent unnecessary bias. And yet, Wikipedia has clearly not gone out of its way to entrench itself. It gives away its code and its data. All of its operating decisions are open. It has no power over contributors or users, except everyone’s knowledge that if they want to use or contribute to an encyclopedia, Wikipedia is the best place to go.

So perhaps governments to some significant extent are the same. Even though they do attempt to assert a monopoly on force, maybe that is a symptom not a cause of their dominance (at least in the better cases). And maybe if Wikipedia or CraigsList gets sufficiently entrenched it will seem imposed on us. After all, if we signed a contract to advertise through CraigsList, then we’d have to pay, on pain (ultimately) of force. And what if we felt we had no choice but to advertise through CraigsList?

So, a final set of questions: What can these organizations teach us about the morality of power? Do we need to worry that they’ll abuse their power? And conversely, how far could we go toward improving our society by better understanding what makes these organizations unusually benign?
