What’s wrong with Stephen Turner’s Social Theory of Practices

In The Social Theory of Practices: Tradition, Tacit Knowledge, and Presuppositions (1994), Stephen Turner mounts a sophisticated attack on the idea of “social practices” as some kind of supra-individual entities. (I will call these “coarse grained entities” below, to avoid the value-laden implications of “higher level” or similar locutions.) This attack is part of a broad effort to support methodological individualism and to undermine any theory or evidence that contradicts it.

This problem is important in its own right, but it gains additional significance in the context of population thinking. If “only the population is real”, should we regard claims about coarse grained entities as fictitious? Of course my answer is “no”, but this requires careful analysis. We want a firm distinction that lets us adopt a realist ontology for these coarse grained entities while rejecting any treatment of them as abstract entities that somehow exist independently of the population.

Turner’s book is worth a response because it is a relatively clear and thoughtful examination of the argument against supra-individual entities. Analyzing Turner can help us figure out why he, and methodological individualists in general, are wrong, and it can also bring into clearer focus the nature, dynamics, and importance of coarse grained entities.


Population thinking and institutions – a quick sketch

Here are a few ideas about how population thinking applies to various institutions, to be elaborated in some future posting:

Yochai Benkler’s peer-production arguments are based on population thinking. He explores the implications of recruiting project members from large populations, where the potential members have substantial variation in “fit” to the project. (The sketch after the next item illustrates why this variation is doing the work.)

Eric Raymond’s reasoning in “The Cathedral and the Bazaar” is based on population thinking in ways similar to Benkler’s (not surprisingly). “Given enough eyeballs, all bugs are shallow” only makes sense in the context of a population with the appropriate variation in debugging skills. The whole “bazaar” idea is basically a population-based approach, while “cathedrals” (in Raymond’s usage) very much attempt to approximate ideal types. (Cathedral construction was probably much more population-based in real cases.)
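As a toy illustration of both points (my own sketch, not Benkler’s or Raymond’s formalization, and every number in it is an assumption): if reviewers vary widely in how long a given bug would take them to find, the time until someone finds it is an extreme-value statistic that falls rapidly as the population grows; if every reviewer were the “typical” reviewer, adding eyes would buy nothing.

```python
# Toy sketch (mine, not Raymond's or Benkler's math; all numbers are
# assumptions). Each reviewer i would take time t_i to find a given bug,
# and the project only waits for the fastest of the n reviewers it recruits.
import numpy as np

rng = np.random.default_rng(0)

# Assumed variation in skill: time-to-find is lognormal, so a few reviewers
# find the bug almost immediately while most take a long time.
population_times = rng.lognormal(mean=3.0, sigma=1.5, size=1_000_000)
typical_time = population_times.mean()   # the "ideal type" reviewer

def expected_time_to_fix(n, times, trials=500):
    """Average time until the first of n randomly recruited reviewers finds the bug."""
    samples = rng.choice(times, size=(trials, n), replace=True)
    return samples.min(axis=1).mean()

for n in (1, 10, 100, 1000):
    varied = expected_time_to_fix(n, population_times)
    # n clones of the "typical" reviewer all take the same time, so adding
    # eyes buys nothing in the typological picture.
    print(f"n={n:4d}  varied population: {varied:8.2f}   identical 'types': {typical_time:8.2f}")
```

The same extreme-value logic is Benkler’s point about fit: the best match among n candidates improves with n only because the candidates differ.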

Large open source projects are decisively population-oriented. Consideration of these projects brings to the surface a central problem in applying population thinking to institutions. At any given time much of the structure and activity is defined by a small number of participants, and it is hard to regard them as a “population” – their individual characteristics and their relationships have a big influence on the project. Over a somewhat longer time scale, however, the populations (of users, contributors, partners, etc.) in which they are embedded have a major and often decisive influence on the evolution of the project. Often the core group is replaced over time (usually voluntarily), and the replacement process depends on the larger populations as much as on the prior structure of the core group. (I’m talking here as though there were a clear boundary for the core group, but of course that is importantly incorrect. The core group merges into the population, and individuals can migrate smoothly in and out without necessarily crossing any clear-cut boundaries.)

Large companies should probably be understood (and managed) largely in terms of population thinking. Legal frameworks impose a significant amount of typological structure on companies (corporations, tax rules, accounting rules, and so on). However, when a company of more than a few people succeeds, it is because of population dynamics – the dynamics of the internal population structure, and the dynamics of the external populations of potential partner companies and/or populations of individual customers.

The only workable view of science that I know of is population-based. (Often this is called “evolutionary epistemology”, but a lot of work still has to be done to figure out why science works so much better at generating knowledge than other institutions, even given the evolutionary framework.) Previous views, and most current views, attempt to find “rules” for scientific activity that guarantee (an approximation to) “truth” – obviously typological thinking, and, more importantly, clearly wrong. This “rule-governed” approach can’t be made to work either retrospectively (trying to account for the real history of science) or prospectively (trying to give scientists recipes for how to do their job).

Operations like eBay depend critically on population thinking. eBay is a place for populations to coordinate. (This is not so much true of Amazon, though the book reviews and similar features are population-oriented.)

Duncan Foley has shown that population models of economic activity (using methods from thermodynamics) can produce economic equilibria that fit real economic data better than the standard neo-classical mechanisms, which are based on deterministic auction processes. Furthermore, the equilibria produced by these population models are much more robust under random perturbation than neo-classical equilibria. And while the neo-classical process of convergence to equilibrium depends on absurd assumptions about the rationality of the participants and the information-processing capacity of the market mechanisms, the assumptions required for a population approach are entirely reasonable.
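A feel for why this works can be had from a toy random-exchange model. This is a generic kinetic-exchange sketch of my own, much cruder than Foley’s actual statistical-equilibrium framework, and every parameter in it is an assumption: there is no auctioneer and no rational optimizer, just many random bilateral transfers with total money conserved, yet the population settles into a stable maximum-entropy distribution.

```python
# Toy statistical-equilibrium sketch (not Foley's model): random bilateral
# exchanges with total money conserved. The population relaxes toward a
# maximum-entropy (roughly exponential) distribution of money.
import numpy as np

rng = np.random.default_rng(0)

n_agents = 10_000
money = np.full(n_agents, 100.0)            # everyone starts identical

def trade_round(money, n_trades=100_000):
    """A random payer sends a random amount to a random payee, if solvent."""
    payers = rng.integers(0, len(money), size=n_trades)
    payees = rng.integers(0, len(money), size=n_trades)
    amounts = rng.exponential(scale=10.0, size=n_trades)
    for i, j, dm in zip(payers, payees, amounts):
        if i != j and money[i] >= dm:
            money[i] -= dm
            money[j] += dm

def entropy(money, bins=50):
    """Differential-entropy estimate of the money distribution."""
    hist, edges = np.histogram(money, bins=bins, density=True)
    widths = np.diff(edges)
    mask = hist > 0
    return -np.sum(hist[mask] * np.log(hist[mask]) * widths[mask])

for step in range(5):
    trade_round(money)
    print(f"round {step}: mean={money.mean():.1f}  std={money.std():.1f}  "
          f"entropy={entropy(money):.2f}")
```

The mean never moves, but the distribution spreads until its entropy is roughly maximal, and it is insensitive to the details of who traded with whom; that insensitivity is the kind of robustness under perturbation that Foley contrasts with the fragile neo-classical fixed point.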

Computing as an ideal type

Unfortunately I think the idea of brains/minds “computing” (as typically discussed) is an example of type thinking. Certainly, in many situations people do something functionally equivalent to what we normally call computing. More generally, people (and animals, and even plants and bacteria) process information in the Shannon sense (i.e. not discrete symbols but negative entropy). But the mechanisms are based on interactions within populations (of cells, especially nerve cells) and can’t be understood beyond a certain point without taking that into account.
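To make that contrast concrete, here is a toy example of my own (not a claim about real neurons; the firing probabilities are assumptions): no single cell does anything that looks like symbol manipulation, yet the pooled activity of a population carries nearly a full bit of Shannon information about a stimulus.

```python
# Toy sketch (my assumptions, not neuroscience data): Shannon information
# carried by a population of noisy cells. Each cell fires with probability
# 0.55 when a stimulus is present and 0.45 when it is absent; no cell
# "computes" anything symbolic, yet the pooled majority vote carries close
# to one full bit once the population is large.
import numpy as np

rng = np.random.default_rng(0)

def mutual_information_bits(stimulus, response):
    """I(S; R) estimated from the empirical joint distribution (both binary)."""
    joint = np.zeros((2, 2))
    np.add.at(joint, (stimulus, response), 1.0)
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (ps * pr)[mask])))

n_trials = 100_000
stimulus = rng.integers(0, 2, size=n_trials)            # stimulus absent / present

for n_cells in (1, 10, 100, 1000):
    p_fire = np.where(stimulus == 1, 0.55, 0.45)
    spike_counts = rng.binomial(n_cells, p_fire)         # independent noisy cells
    response = (spike_counts > n_cells / 2).astype(int)  # pooled majority vote
    print(f"{n_cells:5d} cells: estimated I(S;R) = "
          f"{mutual_information_bits(stimulus, response):.3f} bits")
```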

I’ve been reading an interesting (and relatively accessible) book on cellular information processing, The Touchstone of Life by Werner Loewenstein. It is entirely about how information is managed (at the molecular level) in cells, and it certainly could be seen as describing computation. However, it demonstrates a completely different way of thinking about information processing than our typical algebraic/logical models, so it is a good corrective for those of us (and I certainly include myself) who’ve had way too much practice with algebraic-style computation.

Ideal types considered harmful

I had to prepare a talk recently on “Science as Social Practice” and was struck by a quote from Ernst Mayr (via Three Toed Sloth):

The assumptions of population thinking are diametrically opposed to those of the typologist. …. For the typologist, the type (eidos) is real and the variation an illusion, while for the populationist the type (average) is an abstraction and only the variation is real.

Ernst Mayr, What Evolution Is (Basic Books, 2002), p. 84, quoting a 1959 paper of his own.

Discussions of this issue note that “type thinking” attributes variation to error (or in Chomsky’s case to “performance”), while “population thinking” sees variation as contributing to competence. Of course Chomsky does talk about populations at the level of speakers, and notes that different speakers typically have learned different syntaxes. But he doesn’t believe that population mechanisms could underpin syntactic competence in an individual.

I think that a lot of philosophical and linguistic thinking attempts to work with ideal types when what we’ve actually got is populations at multiple levels, and those populations can’t be adequately understood by “type” thinking. I’m sympathetic to this tendency because I think type ideas are much more natural for humans. Unfortunately they break down in important cases, just as geocentric astronomy and Newtonian space/time do. This comes out most clearly when systems are changing (e.g. children learning language or scientists inventing new language).

Learning syntax

This paper by Elman does a good job of showing two things highly relevant to the philosophy of mind (as currently pursued):

  • How statistical learning can acquire compositional structure, and
  • How structural properties of language can be learned without innate syntax.

I see that Gary Marcus has criticized Elman from a (more or less) Fodorian perspective, but Elman has been able to generate exactly the results that his critics claimed his models could not produce. The pattern seems to be that critics assume connectionist models that are much weaker than we can actually build today, and much weaker than the facts of human biology and learning would suggest.
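For readers who want to see how little machinery is involved, here is a minimal sketch of the kind of simple recurrent network Elman works with. It is my own re-implementation of the general idea, not his code or his corpus, with a toy lexicon in the spirit of the letter-prediction simulation in “Finding Structure in Time”. The only training signal is “predict the next letter”; no syntactic categories or rules are built in, yet the prediction loss falls well below the best that can be done while ignoring sequential structure.

```python
# Minimal Elman-style simple recurrent network (my sketch, not Elman's code).
# Task: predict the next letter in a stream built from the "words" ba, dii,
# guuu. Falling cross-entropy (below the unigram baseline) shows the net
# extracting sequential structure from distributional statistics alone.
import numpy as np

rng = np.random.default_rng(0)

chars = "bdgaiu"
idx = {c: i for i, c in enumerate(chars)}
words = ["ba", "dii", "guuu"]
text = "".join(rng.choice(words, size=4000))
data = np.array([idx[c] for c in text])

n_in = n_out = len(chars)
n_hid = 20
lr = 0.1

# Parameters: input->hidden, context->hidden, hidden->output.
W_xh = rng.normal(0, 0.1, (n_hid, n_in))
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))
b_h = np.zeros(n_hid)
W_hy = rng.normal(0, 0.1, (n_out, n_hid))
b_y = np.zeros(n_out)

def one_hot(i):
    v = np.zeros(n_in)
    v[i] = 1.0
    return v

# Unigram baseline: the best you can do while ignoring sequential structure.
counts = np.bincount(data, minlength=n_out) / len(data)
baseline = -np.sum(counts * np.log(counts))
print(f"unigram baseline cross-entropy: {baseline:.3f} nats")

for epoch in range(5):
    h = np.zeros(n_hid)          # context units start at rest
    total = 0.0
    for t in range(len(data) - 1):
        x, target = one_hot(data[t]), data[t + 1]
        # Forward: the context (previous hidden state) is treated as extra
        # input, as in Elman's original scheme (no backprop through time).
        h_new = np.tanh(W_xh @ x + W_hh @ h + b_h)
        logits = W_hy @ h_new + b_y
        p = np.exp(logits - logits.max())
        p /= p.sum()
        total += -np.log(p[target])
        # Backward: a single step of backprop at each time step.
        dlogits = p.copy()
        dlogits[target] -= 1.0
        dh = W_hy.T @ dlogits * (1.0 - h_new**2)
        W_hy -= lr * np.outer(dlogits, h_new)
        b_y -= lr * dlogits
        W_xh -= lr * np.outer(dh, x)
        W_hh -= lr * np.outer(dh, h)
        b_h -= lr * dh
        h = h_new
    print(f"epoch {epoch}: mean cross-entropy {total / (len(data) - 1):.3f} nats")
```

Elman’s own simulations go considerably further than this toy, but even here statistical learning plus a recurrent memory extracts structure that nothing in the architecture anticipates.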

Can we declare the poverty-of-the-stimulus argument dead now?