Who’s in charge here?

In a very useful post, Jonah Lehrer wonders:

…if banal terms like “executive control” or “top-down processing” or “attentional modulation” hide the strangeness of the data. Some entity inside our brain, some network of neurons buried behind our forehead, acts like a little petit tyrant, and is able to manipulate the activity of our sensory neurons. By doing so, this cellular network decides, in part, what we see. But who controls the network?

I posted a comment on Jonah’s blog but it took so long to get approved that probably no one will see it. So I’m posting an enhanced version here.

Jonah’s final sentence, “But who controls the network?” illustrates to me the main obstacle to a sensible view of human thought, identity, and self-regulation.

We don’t ask the same question about the network that controls our heart rate. It is a fairly well defined, important function, tied to many other aspects of our mental state, but it is an obviously self-regulating network. It has evolved to manage its own fairly complex functions in ways that support the survival of the organism.

So why ask the question “Who controls it?” about attentional modulation? We know this network can be self-controlling. There are subjectively strange but fairly common pathologies of attentional modulation (such as hemi-neglect, where we even understand some of the network behavior) that are directly traceable to brain damage, and that reveal aspects of the network’s self-management. We can measure the way attention degrades when overloaded through various cognitive tasks. Etc. etc. There’s nothing fundamentally mysterious or challenging to our current theoretical frameworks or research techniques.

Yet many people seem to have a cognitive glitch here, akin to the feeling people had on first hearing that the earth was round, “But then we’ll fall off!” Our intuitive self-awareness doesn’t stretch naturally to cover our scientific discoveries. As Jerry Fodor says “there had… better be somebody who is in charge; and, by God, it had better be me.”

I’ve written some posts (1, 2) specifically on why this glitch occurs but I think it will take a long time for our intuitive sense of our selves to catch up with what we already know.

And I guess I ought to write the post I promised last April. I’ll call it “Revisiting ego and enforcement costs”. Happily, it seems even more interesting now than it did then, and it ties together the philosophy of mind themes with some of my thinking on economics.

Meta: Patterns in my posting (and my audience)

I’ve been posting long enough, and have enough reaction from others (mainly in the form of visits, links and posts on other blogs) that I can observe some patterns in how this all plays out.

My posts cluster roughly around three main themes (in retrospect, not by design):

  • Economic thinking, informed more by stochastic game theory than Arrow-Debreu style models
  • The social impact of exponential increases in computer power, especially coupled with statistical modeling
  • Philosophical analysis of emergence, supervenience, downward causation, population thinking, etc.

These seem to be interesting to readers roughly in that order, in (as best I can tell) a power-law-like pattern — that is, I get several times as many visitors looking at my economic posts as at my singularity / statistical modeling posts, and almost no one looking for my philosophical analysis (though the early Turner post has gotten some continuing attention).

I find the economics posts the easiest — I just “write what I see”. The statistical modeling stuff is somewhat more work, since I typically have to investigate technical issues in more depth than I would otherwise. Philosophical analysis is much harder to write, and I’m typically less satisfied with it when I’m done.

The mildly frustrating thing about this is that I think the philosophical analysis is where I get most of my ability to provide value. My thinking about economics, for example, is mainly guided by my philosophical thinking, and I wouldn’t be able to see what I see without an arduously worked out set of conceptual habits and frameworks. For these posts, I’d enjoy the kind of encouragement and useful additional perspectives that I get when people react to the other topics.

Reflecting on this a bit, I think mostly what I’m doing with the philosophical work is gradually prying loose a set of deeply rooted cognitive illusions — illusions that I’m pretty sure arise from the way consciousness works in the human brain. Early on, I wrote a couple of posts that touch on this theme — and in keeping with the pattern described above, they were hard to write, didn’t seem to get a lot of interested readers, and I found them useful conceptual steps forward.

“Prying loose illusions” is actually not a good way to describe what needs to be done. We wouldn’t want to describe Copernicus’ work as “prying loose the geocentric illusion”. If he had just tried to do that, it wouldn’t have worked. Instead, I’m building up ways of thinking that I can substitute for these cognitive illusions (partially, with setbacks). This is largely a job of cognitive engineering — finding ways of thinking that stick as habits, that become natural, that I can use to generate descriptions of stuff in the world (such as economic behavior) which others find useful, etc.

In my (ever so humble) opinion this is actually the most useful task philosophers could be doing, although unfortunately, as far as I can tell, they mostly don’t see it as an important goal, and I suspect in many cases would say it is “not really philosophy”. To see if I’m being grossly unfair to philosophers, I just googled for “goals of x” for various disciplines (philosophy, physics, sociology, economics, …). The results are interesting and I think indicate I’m right (or at least not unfair), but I think I’ll save further thoughts for a post about this issue. If you’re curious, feel free to try this at home.

Networks of knowledge

My recent metaphysics post touches on a question I’ve been thinking about for some time: How can we judge whether a given domain of inquiry or a theoretical proposal is credible or not? Of course this is a very hard question, but I think we should pay more attention to an aspect of it that can give us at least retrospective insight.

Some domains were once very important, but have completely lost any credibility — for example astrology. Some domains have been losing credibility for a long time but haven’t been completely written off — for example Freudian psychological theory. Some domains seem quite credible but are being vigorously attacked by investigators who are themselves credible — for example string theory. Also, in some cases, proposals that were broadly rejected were later largely adopted, sometimes after many decades — for example continental drift, reborn as plate tectonics.

Philosophers of science, and many epistemologists, mine these historical trajectories for insight into the broader question. There are diverse approaches to explaining the success or failure of various theories and research programs. However I think it is fair to say that the vast majority of these attempts are “internalist”, in the sense that they focus on the internal state of a research program over time. Different approaches focus on formal characteristics of a sequence of theories, social and historical factors, methodological factors, etc. but almost all accounts assume that the answer is to be found within the research program itself.

I’d like to propose a different perspective: We can judge the health of a research program by its interactions with other research programs. As long as a research program is actively using and responding to results from other domains, and as long as other domains are using its results, or working on problems it proposes, it is healthy and will remain credible. If it proves to be incapable of producing results that other domains can use, or even worse, if it stops responding to new ideas and challenges from external research, it is on its way to becoming moribund.
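The criterion can be made concrete with a toy sketch. To be clear, this is entirely my construction: the “citation” data below is invented for illustration, not real bibliometrics. Represent cross-domain use of results as directed links, and score a program by its cross-domain traffic in each direction.

```python
# Hypothetical cross-domain use of results: (using_program, supplying_program).
# The data is invented purely to illustrate the criterion.
links = [
    ("geology", "physics"),              # radiometric dating techniques
    ("archeology", "physics"),
    ("evolutionary_biology", "genetics"),
    ("genetics", "evolutionary_biology"),
    ("astrology", "astrology"),          # draws only on itself
]

def exchange_score(program, links):
    """Cross-domain traffic in each direction.

    On the criterion above, a program with no inbound links (no one
    uses its results) and no outbound links (it uses no one else's)
    is on its way to becoming moribund.
    """
    inbound = sum(1 for user, supplier in links
                  if supplier == program and user != program)
    outbound = sum(1 for user, supplier in links
                   if user == program and supplier != program)
    return inbound, outbound

print(exchange_score("physics", links))    # (2, 0): others use its results
print(exchange_score("astrology", links))  # (0, 0): moribund by this measure
```

A real version would need time-resolved data and a much subtler notion of “using results”, but the shape of the judgment is the same.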

Looking back at the historical trajectories of many research programs, this criterion works quite well. It is not very hard to see why this could be the case. Investigators in any given domain are constantly making practical judgments about where to spend their effort, what ideas proposed by others they should trust, etc. (Kitcher discusses this point in detail in The Advancement of Science.) Investigators who might take advantage of results from a given external domain have a strong incentive to make accurate assessments of whether those results can actually contribute to their work. Furthermore, they have a lot of information about how reliable, relevant, and easy to use a given result is likely to be (compared, for example, with an historian or philosopher). So if a research program isn’t generating useful results, its neighbors will sense that, and will have strong incentives to accurately reflect their judgment in their research practices.

However I think the implications are actually much deeper than these obvious (and probably valid) factors. For example, the trajectories of research programs are often dramatically shifted by new techniques that depend on external results. Plate tectonics became dominant through a dramatic shift in opinion in 1966, largely as a result of improved measurements of magnetic orientation in sea floor rocks. Paleontology and archeology have been dramatically affected multiple times by improvements in dating based on physics. Evolutionary biology has been hugely reshaped by tools for analyzing genetic similarity between species. Etc.

Such shifts open up major new interesting questions and opportunities for progress. But they are much less likely to occur in domains that, for whatever reason, are cut off from active interchange with other research programs. Also, some reasons why a domain may be cut off — the desire to protect some theoretical positions, for example — will also tend to cause internal degeneration and ultimately loss of credibility.

More generally, my criterion reflects the fact that all research programs exist within a network of related activities — technical, intellectual, educational, etc. — without which they would wither and die. In essence, I’m advocating taking this network, and its changes over time, more seriously.

This criterion doesn’t engage in any obvious way with the usual question “Are these theories true?” (or at least, becoming more true, if we can figure out what that means). I’m not even sure that I can show that there is a strong connection.

Possibly this indicates that my criterion is fatally flawed. Or possibly it means I should look harder for a connection. But I suspect that this actually means that the idea of “truth” does not work very well at the scale of research programs. If a scientist is reporting experimental results, “truth” may be a very appropriate criterion, especially if we are concerned about fraud or sloppiness. But in these larger issues we should probably try to sharpen our criteria for pragmatic usefulness, and not waste time arguing about truth.

Metaphysics that matters

I find questions about supervenience, the disjunction problem, etc. fascinating. I think at least some of these questions are very important.

But non-philosophers I know find these questions supremely boring — typically just pointless. These are people who find current “hard problems” in cosmology, quantum physics, mathematics, neuroscience, etc. interesting, even though they aren’t professionally involved in those fields. So why not philosophy?

Esoteric questions in other disciplines always seem to be connected to issues that make sense to non-experts. The dynamics of a probe D3-(anti-)brane propagating in a warped string compactification bear on whether there’s life after the big crunch. But technical problems in philosophy often seem disconnected from issues that matter to non-philosophers.

For example, typical arguments for property dualism assign the non-physical properties such a thin, peripheral, technical role that no one outside of philosophy has a reason to care if philosophers decide that property dualism is true or false. Zombies in some metaphysically possible (but nomologically impossible) world might be a little more or less unreal, and that’s about it. Similar disconnects exist for many other hot topics.

A constructive response

Enough complaining. Here is a list of metaphysical questions framed to emphasize their major implications outside of philosophy. I briefly connect each question with the existing philosophical debate and with some examples of non-philosophical implications, but I don’t provide enough background to make this very accessible to people who don’t already know the philosophical issues. If you want more context, ask!

  • How should we think about the relationship of a coarser grained entity to its finer grained components?

    This is my version of the question of how mind supervenes on the brain, how macroscopic entities supervene on micro-physics, etc. To connect with any field outside of philosophy, we have to accept that coarser grained entities “exist” in some useful sense; the question is what sense.

    This issue is very important in many disciplines:

    • How do individuals make up institutions?
    • How do modular brain sub-systems interact in complex cognitive skills?
    • How do molecular level biological processes coordinate to maintain and reproduce cellular level structure?

    Every discipline addresses these questions in limited, specific ways. However I think most disciplines avoid dealing with them fully and explicitly, because we currently lack the conceptual framework we need to talk about them clearly, or even to know what should count as a general answer. If philosophy can shed any light on the general question, it will help people better come to grips with the specific issues on their home turf.

  • How does a coarser grained entity affect the behavior of its finer grained components?

This is the question of downward causation, an important issue in the context of supervenience. Again, to engage other disciplines, we need philosophical discussions that accept that coarser grained entities do somehow affect the activities of their components. Philosophy can potentially provide a schema for handling specific cases.

    Real examples, parallel to the questions above:

    • How do institutions influence the behavior of the people who make them up?
    • How do skills or habits organize the behavior of brain modules?
    • How do cells regulate the molecular processes that maintain them?

  • How can we tell whether a proposed concept picks out a meaningful aspect of the world, or not?

This is typically discussed as the disjunction problem in philosophy. A recent example was the debate, in both the astronomy community and the public sphere, over whether Pluto was “really” a planet.

    The deeper questions behind any specific disciplinary debate are:

    • Is this choice of terms arbitrary (perhaps socially determined), or do some terms actually “carve nature at the joints” better than others?
    • Assuming there are terms that better fit the structure of the world, what criteria tell us that we’ve found them?

    These are hard questions, debated by most disciplines from time to time, as new terms are needed or old ones become questionable. But currently, there is no bridge between the related debates in philosophy over the disjunction problem and more generally the relationship between propositions and the structure of the world, and the needs of practitioners in the disciplines.

  • How should we handle dubious references?

    There are a number of ongoing struggles within philosophy about how to handle problematic references — for example, to Sherlock Holmes’ hat (I’m sure you remember what it looks like). The problem of course is that Holmes never existed so we can’t even say he had no hat. But in various ways similar problems arise for the entities referenced in counterfactuals (“If a large spider had been here, James would have run away”), theoretical entities of uncertain status (the very D3-(anti-)brane referenced above), and even perfectly normal mathematical entities (3).

    Again, the status of hypothetical entities, and even how to debate that status, is an important issue from time to time in most disciplines. For example, the status of the entities posited by string theory (such as the brane above) is a matter of extremely heated debate. The debate is not just about whether these entities exist, but whether it even makes sense to treat them as hypothetical. More violent disagreements along these lines arise in fields such as literary theory, for example.

    Disciplines must answer questions similar to those above, when confronting any given cluster of dubious references:

    • How should we decide whether these references “work” well enough to be worth using?
    • What can we do to make them into respectable references, or alternatively discover that they should be rejected?

    And again, philosophy has an opportunity, if it chooses, to help disciplines make these judgments by finding ways to translate whatever insights can be derived from its internal debates.

So what?

Questions like these now fall into a no-man’s-land. The specific disciplines where they arise aren’t professionally concerned with the broad questions — they just want to resolve a specific problem and move on. Philosophy, which seems to be the natural home for these broad questions, appears to largely ignore connections to examples like those that arise in other disciplines.

So I would argue that philosophy is missing a major opportunity here, and failing to contribute in ways that would make it a much more credible and important discipline. Whether or not the discipline of philosophy as a whole addresses these questions, I think they deserve attention, and I plan to work on them.

Why programmers understand abstractions better than philosophers

In an interesting post Joel on Software discusses many examples of specific “leaky abstractions”, and how our daily work as software developers requires us to understand, not just the abstractions we work with, but the underlying mechanisms that maintain them (and that fail in some cases).

I’m sorry to say that in my experience, philosophers tend to treat abstractions as though they behave as defined, and tend to ignore the actual mechanisms (typically made out of human cognition and social interaction) by which those abstractions are maintained.

As a result, they don’t seem to have good conceptual tools for dealing with the inevitable “leakiness” of all abstractions. Of course an implicit point of Joel’s article is that our abilities to maintain abstractions allow us to ignore this leakiness most of the time — and some disciplines, such as mathematics, have suppressed the leaks very well. However I think we could find ways in which even the best understood mathematical abstractions leak, though I won’t argue that point here. It would be interesting to rank disciplines by how well they manage the leakiness of their abstractions.
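A concrete leak of the kind Joel catalogs (this example is mine, not from his post): floating-point numbers present the abstraction “real number”, but the binary representation underneath shows through in ordinary decimal arithmetic, and working code has to know about it.

```python
# The floating-point "real number" abstraction leaks: 0.1 and 0.2 have no
# exact binary representation, so the sum is not exactly 0.3.
a = 0.1 + 0.2
print(a == 0.3)              # False: the abstraction leaks
print(abs(a - 0.3) < 1e-9)   # True: robust code compares with a tolerance

# Even properties guaranteed for real numbers, like this cancellation,
# leak away: the added 1 is lost to rounding at this magnitude.
print((1e16 + 1) - 1e16)     # 0.0, not 1.0
```

Most of the time we can ignore this, which is exactly the point: the abstraction is maintained well enough that the leaks only matter at the boundaries.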

Philosophy has a different relationship to leaky abstractions than most disciplines, because it is mostly about abstractions (rather than mainly about the things the abstractions describe). The inherent leakiness of abstractions raises substantive issues throughout philosophy. It can’t be regarded as a “boundary problem” that can be ignored in “normal” situations, as it can in most disciplines (except during rough transitions). Note that this need to pay explicit attention to mechanisms of abstraction also applies to computer systems design — good system design has to manage leakiness, and must accept that it cannot be eliminated. This is true for the same reason — system design is about abstractions, and only indirectly about the things those abstractions describe.

Language and populations

After posting my ignorant non-argument about philosophical theories of reference, I was happy to see a post on The Use Theory of Meaning at The Leiter Reports that seemed highly relevant, with abundant comments. Alas, on reading it I found that it was unhelpful in two ways. First, generally the arguments seemed to assume as background that the meaning of linguistic expressions was universal (much more on this below). Second, the discussion was obviously confused – by which I mean that participants disagreed about the meaning of the terms they used, how others paraphrased their positions, etc.

If the first problem is correct, then the whole discussion is fairly pointless. Furthermore I think the first problem creates the conditions for the second, because an assumption of universal meanings for expressions is so far from any actual situation in language that an attempt to base theories on it is likely to lead to chaos.

Here is a clear example of this assumption of “universal meaning”: William Lycan in Some objections to a simple ‘Use’ theory of meaning says “[A rule for the meaning of a name must be] a rule that every competent speaker of your local dialect actually obeys without exception, because it is supposed to constitute the public linguistic meaning of the name.” “A rule that every competent speaker… obeys” is universal in just the sense I mean.

Now, this simply isn’t an accurate way to look at how people actually use language. I hope any readers can see this if they think about some examples of creating and understanding expressions, but I’m not going to argue for it now – maybe in another post. I can imagine all sorts of responses: Chomsky’s competence stance, claims that we have to talk that way to have a meaningful (!) or useful (!) theory, statements that it is some sort of harmless idealization (of a different sort from competence), etc. However given the messes in the philosophy of language now which are (in my opinion) largely due to this background assumption, and the concrete results in linguistics and machine learning that show we can get along just fine without it, I reject any such claim. Again, I’m not going to try to substantiate these bald claims right now – but I’m confident I can, and the Steels paper in the earlier post is a good example.

As my earlier post says, what we actually have is a population. To take the story further, each member has dispositions (rules if you will) about how to use a term, how to compose terms to create more complex meanings, or decompose expressions to recover their meanings, etc. But the dispositions of each member of the population will in general be different in all sorts of ways from those of other members. There is no requirement that these dispositions be completely describable, any more than your disposition to shape your hand as you reach for a cup is completely describable – though they might be remarkably consistent in some ways. As a result, no matter how narrowly we define the circumstances, two members of the population will quite likely differ in some details of their use of expressions in those circumstances.

Even with no total agreement in any particular, language works because (again as mentioned in the earlier post) people can resort to context and can create more context through interaction while trying to understand or make themselves understood. This resort prompts us to adjust our usage dispositions over time to bring them closer together, when we find such adjustment helpful and not too difficult. However it also implies the meaning of any given expression may depend in an unbounded way on its context.

I’ll end this with comments on two related issues. First, even apparently consonant ideas, such as Wittgenstein’s “family resemblances”, typically embed the background “universal meaning” assumption. In Wittgenstein’s metaphor the word “game” refers to a particular family, held together only by those resemblances – but the family is treated as a universally accepted meaning for the term, albeit not conveniently delimited by necessary and sufficient conditions. My use of overlapping (and largely consonant) dispositions is not equivalent to this, as I hope is obvious, perhaps with a little thought. However of course overlapping dispositions can easily give rise to meanings that fit Wittgenstein’s “family resemblances”, and the relationship between two different speakers’ usage dispositions for a given term should perhaps be seen as a family resemblance.

Second, such things as Gettier problems and difficulties with vagueness seem to me to arise quite directly from this assumption of universal meaning. Given the context dependence of meaning in my proposed (very fundamental) sense, it is not surprising that unusual contexts induce incoherence in our intuitions about meaning. The interpretation of our claims that we’ve seen a barn will depend on whether the listener knows there are lots of fake barns about (and knows that we know or don’t know). A population with varying dispositions about the boundaries of Everest will produce something very like supervaluation, and our actual use of language will take that into account. And so forth.
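The Everest point can be made vivid with a toy simulation (my own construction, with invented numbers): give each speaker in a population a slightly different disposition about where the mountain ends, and evaluate a claim against all of them. Supervaluation-style verdicts fall out directly.

```python
def population_verdict(point_km, boundaries):
    """Evaluate 'this point is on Everest' against each speaker's
    disposition, modeled here as a cutoff distance from the summit (km).

    Supervaluation-like result: 'true' only if every disposition agrees,
    'false' only if none does, otherwise 'indeterminate'.
    """
    votes = [point_km <= cutoff for cutoff in boundaries]
    if all(votes):
        return "true"
    if not any(votes):
        return "false"
    return "indeterminate"

# Each speaker draws the mountain's edge somewhere between 4 and 6 km out.
boundaries = [4.0, 4.5, 5.0, 5.5, 6.0]

print(population_verdict(1.0, boundaries))   # true: inside every boundary
print(population_verdict(5.2, boundaries))   # indeterminate: speakers differ
print(population_verdict(9.0, boundaries))   # false: outside every boundary
```

No speaker holds the “gappy” semantics; the gaps emerge from the variation in dispositions across the population.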

Solving Kuhn’s problems with reference

Thomas Kuhn expressed grave doubts about whether the protagonists on opposite sides of a given scientific revolution even “live in the same world”. His doubts were based on many historical examples where the opposite sides disagreed deeply about the reference of important terms that they shared. In later essays he emphasized that the participants could understand each other, and rationally make choices about what theory to adopt based on reasonable criteria, but he never gave up this fundamental concern.

Kuhn’s examples from scientific revolutions are especially well documented cases of the sort of shifts of reference that occur all the time in our language. Kuhn’s analyses make clear the stakes: if we want to understand how beliefs can and should change, we need a concept of reference that adequately supports these kinds of changes, and that gives us some basis for judging whether the changes are appropriate.

We can approach the needed concept by observing that a reference can’t be understood in isolation. A speaker’s reference can’t succeed unless it picks out a referent for some listener. Success depends on a shared context; typically others understanding references or making them, but in some cases a context of the same person’s earlier or later referencing. (As a software developer, I am very familiar with the need to understand my earlier self’s choice of references – “When I named this variable, what was I thinking?”)

The typical philosophical definition of reference elides this need for context. For example, the Stanford Encyclopedia of Philosophy begins its article on reference by saying “Reference is a relation that obtains between expressions and what speakers use expressions to talk about.” The article continues with an exploration of the relationship between expressions (typically words) and things; speakers drop out of the picture. And listeners were never really in the picture.

The causal theory of reference in a way admits this need for context, but quickly jumps to defining rules for using very stereotyped historical context to pick out the correct target of a reference. In effect, the elision performed rhetorically in the article is performed formally in these theories of reference – speakers and listeners are simply carrying out their formal roles (more or less perfectly). Reference depends only on the very narrow aspects of the situation specified in the formal account.

This may work as an approximation to the use of some kinds of references in stable, unproblematic situations, but it fails badly in situations where the references of terms are unstable, murky, and being negotiated – that is, in precisely the sort of situations Kuhn documents, and more broadly, the sort of situations where we need to understand how reference actually works! This seems to me to be a philosophical case of looking under the lamp-post when we know the car keys were dropped in the alley.

Other philosophers have taken this goal of a formal account of reference as valid, but then either deemed it “inscrutable” (Quine) or vacuous (deflationists). In either case, we are left with no useful tools to interpret or judge situations involving reference conflict or change. This would be a case of denying that there ever were such things as car keys, or if there were, that they did anything useful.

I think Kuhn suffers from a related problem. He seems to have felt that formal accounts fail in scientific revolutions (as they clearly do), and that only a formal account could explain reference. So during revolutions something very mysterious happens, and people come to live in “different worlds”. Within each world reference could perhaps be formally described; across worlds formal descriptions break down.

Let us use our original intuition to take the analysis of reference in a different direction. “Reference is a relation that obtains between expressions and what speakers use expressions to talk about.” If a speaker is using an expression to talk about something, then there is a listener (at least hypothetically) who will have to understand it (correctly) for the speaker to be successful. Correct understanding, in this sense, means that it picks out an aspect of the world that satisfies the purposes of the speaker. This is the minimal context of reference.

Considering this context leads to a view of reference which is neither formal nor useless. Expressions in context do refer. Given enough context, people are very good at figuring out what a speaker refers to. But the aspects of the situation that could be relevant are unbounded and the process of figuring out can’t be specified by a fixed set of rules. In particular, if participants find that their ability to refer successfully is breaking down for some reason, they resort to searching for potentially relevant new aspects of the situation, and arguing about what sort of rules should be followed – behaviors characteristic of scientific revolutions, as well as other episodes of belief and meaning change.

In this account, even though no complete formal account of reference is possible, reference is not mysterious, opaque or incomprehensible. In fact, the process of achieving consensus on the reference of terms is simple and robust enough that it has been implemented in populations of robots. In these experiments, each robot did live in a “different world” – each robot sensed and classified the world in a unique way. Furthermore, there was never a fixed procedure for them to negotiate agreement on which terms to use. But very quickly, through rough pointing and a desire to coordinate their references, the whole population converged on common terms, often after some disagreement (i.e. different subgroups temporarily agreed on different terms). Under some circumstances (such as a large influx of new robots who didn’t know the consensus terms) references again came “up for grabs” and sometimes got reassigned. None the less, the terms used during any given period of consensus did refer; a speaker could use an expression to pick out a piece of the world, and a listener would reliably guess the correct referent.
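The convergence process described above is essentially Steels’ naming game. Here is a minimal sketch, drastically simplified relative to the robot experiments (one object, no real sensing or pointing, and an adopt-on-failure rule of my own choosing), just to show that consensus on reference needs no fixed negotiation procedure:

```python
import random

random.seed(0)  # for reproducibility of this illustration

def naming_game(n_agents=20, max_rounds=20000):
    """Minimal naming game (after Steels): agents negotiate a shared
    name for a single object through random pairwise interactions."""
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for round_ in range(max_rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not inventories[speaker]:          # invent a word if needed
            inventories[speaker].add(f"w{next_word}")
            next_word += 1
        word = random.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:       # success: both commit to it
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                                 # failure: hearer learns it
            inventories[hearer].add(word)
        # stop once the whole population shares exactly one word
        if all(inv == inventories[0] and len(inv) == 1
               for inv in inventories):
            return round_ + 1, inventories[0]
    return max_rounds, None

rounds, consensus = naming_game()
print(rounds, consensus)  # the population converges on one shared term
```

Along the way, different subgroups temporarily settle on different words, just as in the experiments; the competition is resolved purely through local interactions.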

In some sense, I am pointing at (but by no means adequately setting forth) a causal theory of reference. However the causal process is one of approximate (and shifting) consensus in a population of speaker/listeners, not one that can be stereotyped as “dubbing” or reduced to any other formal account based on specified factors. I hope my description gives some sense of why I think this addresses Kuhn’s concerns. In addition it provides a natural way to address issues of vague reference, reference to fictional entities, etc. — but such bald assertion needs some followup which won’t be forthcoming right now.

I have a sense, however, that this account would be profoundly unsatisfactory to most philosophers who are concerned with reference. If such a philosopher finds it unsatisfactory (in its goals and means, not its current state of extreme sketchiness), my question is Why? Why have philosophers spent so much effort on such problematic formal approaches, when a relatively simple account based on actual (and simulated) use of reference will do the job?

What’s wrong with Stephen Turner’s Social Theory of Practices

In The Social Theory of Practices: Tradition, Tacit Knowledge, and Presuppositions (1994), Stephen Turner mounts a sophisticated attack on the idea of “social practices” as some kind of supra-individual entities. (I will call these “coarse grained entities” below, to avoid the value laden implications of “higher level” or similar locutions.) This attack is part of a broad effort to support methodological individualism and attack any theory or evidence that contradicts it.

This problem is important in its own right, but it gains additional significance in the context of population thinking. If “only the population is real” then should we regard claims about coarse grained entities as fictitious? Of course my answer is “no”, but this requires careful analysis. We want to develop a firm distinction that allows us to adopt a realistic ontology for these coarse grained entities, but reject any treatment of them as abstract entities that somehow exist independent of the population.

Turner’s book is worth a response because it is a relatively clear and thoughtful examination of the argument against supra-individual entities. Analyzing Turner can help us figure out why he (and the methodological individualists in general) are wrong, but also can bring into clearer focus the nature, dynamics, and importance of coarse grained entities.

More below the fold

Learning syntax

This paper by Elman does a good job of showing two things highly relevant to the philosophy of mind (as currently pursued):

  • How statistical learning can acquire compositional structure, and
  • How structural properties of language can be learned without innate syntax.

I see that Gary Marcus has criticized Elman from a (more or less) Fodorian perspective, but Elman has been able to generate exactly the results that were supposed to refute him. The pattern seems to be that critics assume connectionist models that are much weaker than we can actually build today, and much weaker than the facts of human biology and learning would suggest.
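As a deliberately crude stand-in for Elman’s simple recurrent network (this is not his model, just an illustration of the bare principle), even a bigram learner trained on a handful of sentences from an invented toy grammar, with no built-in syntax, comes to favor grammatical word orders it has never seen verbatim:

```python
from collections import Counter, defaultdict

# Toy corpus from a tiny invented grammar (Elman's experiments used a
# simple recurrent network and a richer grammar; this is a bare sketch).
corpus = [
    "the boy sees a dog",
    "the dog sees the boy",
    "a boy chases the dog",
    "the dog chases a boy",
]

# Learn bigram transition counts: pure statistics, no innate syntax.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = ["<s>"] + sentence.split()
    for prev, word in zip(words, words[1:]):
        transitions[prev][word] += 1

def score(sentence):
    """Probability of a word string under the learned bigram statistics."""
    words = ["<s>"] + sentence.split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        total = sum(transitions[prev].values())
        p *= transitions[prev][word] / total if total else 0.0
    return p

# The learner prefers a grammatical sentence it has never seen verbatim:
print(score("the boy chases the dog"))  # > 0
print(score("boy the dog sees the"))    # 0.0: ungrammatical order
```

The learned statistics implicitly encode the grammar’s compositional structure (determiner–noun–verb order), which is the point Marcus’s criticism has to contend with.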

Can we declare the poverty of stimulus argument dead now?