Wednesday, October 31, 2007

The Sleep-Synapse II

I received some much-appreciated and very reasonable criticism regarding my last post on sleep. And yes, it's radical to compare sleep to suicide. The real difference lies less in the fact that you have to sleep and don't have to die (both are equally inevitable, which is to say, almost but not quite completely), and more in the fact that sleep performs a much more ambivalent terminal function.

Specifically, it doesn't just annihilate the short-term memory/consciousness; it entrenches this consciousness and memory into the long-term. So although many aspects of my current state might not make it through, a number of core ones do. And these aren't selected randomly (although they are selected somewhat unpredictably). Focus, emotion, repetition, etc., make things more likely to persist.

A system of zero complexity is equivalent to a system of complete complexity, in that both contain zero information. The universe begins and ends with zero as entropy runs its course. An episode of consciousness between sleeps accumulates more and more experience. Continuing this process of accumulation, without cutbacks, without selection, is overwhelming to the point of shutdown.

You can submerge your sorrows or solve intractable problems (perhaps by obscuring important elements!) through sleep; sleep can force a necessary simplification when the world is qualitatively too much with us. Perhaps also the world can be too much with us through brute continuation and quantity. And perhaps we need sleep to deal with this problem in the same manner.

Fear of Observation

The unsettling nature of Newcomb's paradox indicates that suspicion about observation isn't just for paranoid schizophrenics. This game is a paradox because of the perfect relationship between past observation and present prediction, a situation which (we're pretty sure) can't occur in the real world.

But observation can move closer and closer to successful prediction. If this seems to contradict my earlier post about irreducible complexity, it's because I'm now requiring only predictions based on generalizations from patterns in data, not explanations for the phenomena in question. This modification lowers the epistemic threshold enough for contemporary computers to cross it--and then succeed spectacularly.

Datamining allows computers to predict, better and better, future outcomes within the field described by the data. It internally solves the problem of complexity by evaluating only digitized data points triggered by more complex behavior, then predicting which data will be triggered in the future.

Insofar as "important" aspects of people's lives can be translated into these data points, the results are deeply disturbing. Snapshots of social interactions can predict, with shocking accuracy, the future of relationships (student-teacher, dating couple, etc.) for months and years to come. Reducing behavior and faces to easily coded elements of a dataset works.
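To make the mechanism concrete, here is a minimal sketch of this kind of induction: invented, digitized "snapshots" of interactions, and a nearest-neighbor vote predicting the outcome of a new case. The features, data, and outcome labels are entirely hypothetical.

```python
from collections import Counter

# Toy illustration of induction over coded snapshots: each interaction is
# reduced to a few digitized features, and outcomes are predicted by
# matching new cases against previously observed ones.

# (smiles, interruptions, eye_contact) -> relationship outcome months later
observed = [
    ((5, 1, 8), "lasted"),
    ((4, 2, 7), "lasted"),
    ((1, 6, 2), "ended"),
    ((2, 5, 3), "ended"),
]

def predict(snapshot, k=3):
    """Predict an outcome by majority vote among the k nearest coded cases."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(observed, key=lambda case: distance(case[0], snapshot))[:k]
    votes = Counter(outcome for _, outcome in nearest)
    return votes.most_common(1)[0][0]

print(predict((5, 1, 7)))  # a snapshot resembling the "lasted" cases -> "lasted"
```

Nothing here explains why the relationship lasts; the predictor operates entirely on the surface of the data, which is exactly the point.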

This predictive capacity brings the problems with free will uncomfortably close to home. Computers know better than you do what you are going to do in the domains they understand. And they understand almost every domain that you consider important.

Predicting human behavior deductively from systems logic failed. There is no perfect economic, social, or political human, and it's probably impossible to find anyone who represents a perfect synthesis of the three. But predicting human behavior inductively works all too well, suggesting that the regularities are out there, even if we lack the tools to explain them.

Humans are helpless to fend off incursions into traditionally private spheres. AT&T knows about your telephone conversations and the calling networks in which you play a small part. Facebook knows who your real friends are, who is really stalking whom, and when relationships are going to develop. Amazon, Google, and countless others know what you're about to buy.

A seemingly unique set of preferences that fits into categories encompassed by datamining will no longer feel authentic or interesting. In some ways, the fight for subjectivity against the capabilities of mass prediction must involve (subjectively) meaningful activity that cannot be encompassed in a data set or reduced to analogies drawn by others. This activity can occur in groups, but these groups must shatter mechanical and corporate categories of prediction. As individuals succumb more and more to statistical control, the important thing is the act which makes true difference.

The capacity of datamining to predict the behavior of individuals reveals more than anything else the insight of Deleuze's take on eternal return. Those differences which can be subordinated to the taxonomy of a databank fail the test of true difference, for they can be subsumed, accounted for, explained, and defused. Only a reconfiguration of the past that operates autonomously beyond mundane configurations of history and personhood can escape the neutralizing power of prediction.

Thus we should be unafraid of observation and unprotective of the private--as long as we are willing to act in ways that sever the inevitable link between observation and prediction. In a capitalist world growing ever more efficient, prediction and control go hand in hand. As predictions become better and better, we must ask ourselves which aspects of life we are willing to cede to control... and which we will make autonomous.

Disciplinary space through Deleuzism

Science class has talked a lot lately about disciplinary space and the boundaries created between disciplines. This intersects with some material from social studies tutorial regarding systems: specifically, the interpenetration between them.

Earlier I used a metaphor of disciplinary space linking planes (in which disciplines are horizontally articulated) via a vertical chain of causality. Although I think of this model as 3D, it actually only presents a 1D explanation for the relationships between disciplines. All of the disciplines (let's say n of them) are arranged in a line beginning with physics and ending with literature. This is clearly the model E.O. Wilson uses when he says that all disciplines except physics (n-1 of them) can be subsumed by an antidiscipline preceding them in the chain of causality.

This is silly. Mechanical causality is not the only way in which areas of study can relate. Abstract models often spill over in surprising ways... e.g., mathematical into philosophical (Badiou), thermodynamical into social (Luhmann), physical into psychological (quantum theory of consciousness). Each discipline fosters a certain type of logic which simultaneously distinguishes it from others and enables comparison to others.

As a consequence, I'm willing to say that a more appropriate model would include a plane for each disciplinary intersection: that is, a universe of n-1 dimensions (!). What's interesting about this space is not only that it contains massive internal variation, but that n-1 itself changes along with fluctuating disciplinary borders... and since this process occurs slowly, we can envision a gradual pulling-away into a new dimension, as the new entity stakes out boundaries (and in doing so, defines its relationship with the others). Dimensions can also disappear as others make them redundant; that is, as they contain no new information. A three-dimensional world existing only as an infinite extrusion of two dimensions would functionally still be a two-dimensional world.
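The redundancy claim admits a toy formalization: a dimension along which nothing varies contributes no information, so the space is functionally lower-dimensional. A sketch with invented coordinates:

```python
# Toy check for the claim that a dimension carrying no new information is
# functionally absent: if every point in a 3-D cloud shares the same value
# along one axis, the cloud is effectively lower-dimensional.

def effective_dimensions(points):
    """Count axes along which the points actually vary."""
    axes = len(points[0])
    return sum(
        1 for i in range(axes)
        if len({p[i] for p in points}) > 1  # more than one distinct value
    )

# A "3-D" world that is just an extrusion of a 2-D one: z is constant.
extruded = [(1, 2, 0), (3, 5, 0), (4, 1, 0), (2, 2, 0)]
print(effective_dimensions(extruded))  # -> 2
```

The analogy is loose--disciplinary "dimensions" fade gradually rather than collapsing to a constant--but it captures the sense in which a redundant dimension simply stops counting.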

This model requires us to imagine spaces in which dimensions themselves maintain uncertain ontological status... but doesn't this provide the best model of the uncertain and shifting boundaries of the academic universe? The most stable dimensions correspond to the most stable disciplines, and the most robust interactions are among the entities populating these planes.

Note that if n-1 dimensions define the interdisciplinary relationship instead of the vertical line, these dimensions actually branch off into planes... or maybe even higher-dimensional spaces depending on complexity within the discipline!

Discipline space probably closely resembles concept space, organized on the Deleuzian "plane" of immanence. Thinking these spaces is the big upcoming challenge, but even the beginnings of doing so seem rewarding.

For example, this model presents the tantalizing suggestion that insofar as (this is important) disciplines can be reconstituted according to their predecessors in the vertical chain, they can in fact be explained according to any discipline. English as economics, sociology as theoretical physics, anthropology as logic. Luhmann points the way, but attempts to reterritorialize everything onto subjectivity and his particular brand of autopoiesis. There's no inevitable frame of reference for describing any phenomenon; only frames of reference staking out territories in relation to each other.

This observation makes the boundary all the more significant... whereas formerly the clear articulation of boundaries might have seemed trivial or unnecessary due to obvious divisions of subject matter, now boundary divisions are the only way to isolate zones of intensity and self-referential complexity... to prevent incursions from alternative explanatory paradigms which left to their own devices could cannibalize anything.

More on this, or a refinement of it, or a retraction of it ... to come later, when a future self thinks about it in new frames and with new resources.

Tuesday, October 30, 2007

The Sleep-Synapse

Sometimes "I" feel greedy for more time awake because I know that sleeping, in entrenching long-term memories, will terminate or at least subdue some of the themes and processes raging in my (possibly tired) mind. Tomorrow morning I'll wake up and still be the "same person"... but in the process of waking up, I'll first have to figure out who I am (college student), where I am (bed, room), then what I'm about to do (shower). My reconstructed mental state will, after a reasonably long sleep, bear little resemblance to that of the night before.

A related phenomenon occurs with my material and electronic surroundings. There are some things in my room whose location I currently know and whose organization I currently understand which tomorrow will be mysteriously hidden or chaotic. Every night I accumulate a series of Firefox tabs which I have open for a certain reason. Tomorrow, they will lose meaning and perhaps become irrelevant; they may survive for some time, but usually as pure nuisance.

Thus I must attempt to communicate with my (short-term) future self through a complicated system of alarms, bookmarks, calendars, and notes, or risk leaving the meaningful reproduction of my present consciousness to some unpredictable and unconscious mental processes. Even these precautions can only increase the likelihood of successful communication. I still can't shake the sense that when I go to sleep, I die forever, leaving behind only memories and souvenirs.

Yes, I attach an unusual heaviness to the act of letting myself fall asleep. Sometimes I stay up late or attempt to sustain myself with short naps which seem less final. And sometimes I cannot bear consciousness any longer and welcome the long sleep. This decision differs more in degree than in kind from suicide. I am not prepared to reduce myself to that level--the level at which I continue only through my written and social unconscious, violently excluding my more immediate neurological and material unconsciouses--but I do not condemn those who are.

Friday, October 26, 2007

Interdisciplinarity

[Disclaimer: I didn't really plan this out beforehand; it's just a movement of ideas, beginning with a particular thing. I didn't expect it to end up where it did, but I can't say I'm surprised.]

Context: Deleuze & Badiou, plus E.O. Wilson's claim that sociobiology is the "anti-discipline" of the social sciences. Wilson thinks that disciplines ultimately lose power to subsuming reductionist disciplines which can better explain the phenomena in question. He argues that this overcoming, though not really the historical norm, is an inevitable historical end because all processes ultimately emerge from the same physical laws and causal mechanisms, even if articulated in complex and novel ways.

The problem, of course, is that no computer (or, by extension, model) can encompass all of reality at once. Practically, because of the difficulty of gathering sufficient data (which may itself be completely insurmountable), and logically, because the computer would be in the universe but unable to completely represent itself (without generating a paradoxical/impossible infinite recursion).

Disciplines emerge somewhat haphazardly, and probably maintain a great deal of institutional inertia which rigidifies boundaries between them. But even in a system without these commitments, disciplines would have to grow horizontally in the "middle" of the vertical hierarchy, then make connections with their neighbors secondarily. Practically, it's easier to observe and systematize the study of phenomena at a given level than it is to determine all the underlying causal factors for those phenomena.

The (historically real, probably inevitable in the long run) temporal limitations on human life create severe problems for interdisciplinary studies--more so than for knowledge in general. Single fields constantly differentiate into smaller fields whose contents individuals can master. Exploration in these fields tends to take for granted certain concepts ("black boxes!") from areas of disciplinary past or interdisciplinary spillover which enable their particular work.

The intellectual with an interdisciplinary urge faces an irresolvable vertigo (within a metaphorical space organizing disciplines along an up-down axis, corresponding roughly to a big-small axis--more on this later?). "Understanding" any point in this mental space, even "only" to the level made possible by current human knowledge, requires understanding a cone of influence (expanding horizontally as vertical distance increases). Causal factors ripple up and down this cone, altering the status of its (arbitrary) nexus. The paradox: to understand anything, for any duration of time, one must understand (almost) everything--a set of factors which increases without bound with vertical distance and time.

Cannot this despairing interdisciplinary thinker attempt to make reasonable simplifications of the relevant factors? One returns to the problem at a second order: the near impossibility of knowing which simplifications optimize the explanation at hand without knowing what each simplification excludes and what each includes. Even with huge simplifications, one simply cannot come close to knowing enough to really understand... anything. There's simply not enough time to keep pace with the relevant progress of every discipline; but if your knowledge within a particular discipline is outdated or inaccurate or oversimplified to too great a degree, the entire vertical chain of understanding falls apart because of this weak link.

Reassuring but maddening is the contemplation of the body of human knowledge, as a whole, in the abstract. It's tempting to think that an entity able to consciously process this cosmos would be able to produce fantastic insights at every level. To some extent, one can think of the collective unconscious as performing precisely that process; thus learning more things and producing more linkages provides a perspective which still leaves most important things out; but other people know those things, and their knowledge might be linked with some of the things you know about. We know in the abstract that the links are being made.

But even more exciting is the idea that all these links coexist in closer proximity than ever before within the internet. And with the internet's accelerating moves towards integrating humans and human knowledge... who knows what truths may emerge from a proliferation of
linkages simply unavailable to any one human being? Theories of culture/society as brain lacked credibility because of the gaps in the transmission of information and because of the relative salience of the individual subject as perceiver; the internet is resolving these issues by increasing the proximity of ideas and the speed with which they move and relate.

Internet consciousness should thus provide some hope for the intellectual unable to comprehensively understand the currently available mental universe--the process of creatively generating new linkages in parallel with many others like you may yield an entity which can understand.

Vicarious enlightenment.

Tuesday, October 23, 2007

The Black Box

I feel the need to write about an interesting coincidence in two readings for different classes (Berger & Luckman and Pinch). Both develop a Weberesque thesis of routinization in which a society condenses a set of practices or ideas into a practical, epistemological, or symbolic object. The authors call this process "objectivation" or "black-boxing," respectively. I think it might be productive to consider this phenomenon in contemporary society in light of the discipline-to-control transition. [Another take on the same Deleuze essay - without direct reference to my prior one.]

The relatively isolated and independent knowledge contained within the black box should give way to the interpenetrating flows of information characteristic of control. Instead of building blocks which can be assembled into a larger structure (which can proceed in the direction of being made into one larger box or of having internal boxes examined individually), we should see chunks of code with unstable boundaries, continually modulating each other's meanings.

The transition from disciplinary institutions to control society is itself a large-scale example. Discipline produces institutions as self-contained objects--the school, the factory--and does the same for its human products--student, worker. Control societies offset the disintegration of these totalizing but regional domains of meaning with the establishment of a larger regime of capital and information. Enclosures of disciplinary sites create distinct but analogous objects; flows of control society subordinate the qualitative to the quantitative, infusing institutional objects with outside forces.

The digital nature of control is essential. Social humans need black boxes to reduce cognitive load and maintain mutual understanding. Human communication requires shared norms of all kinds: a spatial arrangement of the participants that all accept, the use of concepts (let alone a language) that all understand, and general agreement about the goals directing the interaction. Computer communication, although mediated to some extent by programming language, is fundamentally easier, since complete information about all entities within the communicative situation can be made available by design. In a digital system, scanning the ones and zeroes that make up a component provides complete information about its behavior.

This property of computers applies to digitally conceived (or, relatedly, cybernetic) models of society. Control reduces elements of the system, including people, to quantitative measurements and decision points with all possible outcomes given (and probabilities calculated); this ontology decreases the necessity of black boxes, since the entire system is available for prediction and control.

It is revealing that the black boxes of computer programmers are subroutines. Subroutines help programmers by allowing them to view their code at abstract, more symbolic levels; in this sense, they resemble the old black boxes. Yet computers generate a system of perfect exchange between the name and content (signifier and signified) of the subroutine; variables in the subroutine take on the values given in the main body of code. The old black boxes, the black boxes of discipline, were imperfect because the object as molded only approximated the needs of the situation at hand. The processes behind its objectivation contained extraneous--perhaps even counterproductive--elements which were difficult to modify situationally. Control society eliminates this "deep" historical dimension of the object by bringing everything of importance to the surface.
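The "perfect exchange" between a subroutine's name and its content can be made concrete: invoking the routine and inlining its body are interchangeable, since its variables take on exactly the values supplied by the caller. A minimal sketch with invented names:

```python
# "Perfect exchange" between the name and content of a subroutine: the call
# and the inlined body are fully interchangeable, because the subroutine's
# variables take on exactly the values supplied by the calling code.

def area(width, height):          # the "black box" as subroutine
    return width * height

w, h = 3, 4

via_name = area(w, h)             # invoking the signifier
via_content = w * h               # substituting the signified (inlined body)

assert via_name == via_content    # nothing is lost in the exchange
print(via_name)  # -> 12
```

Contrast this with the disciplinary black box, whose contents could never be substituted for its name without remainder.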

Consider linguistics. Whereas Chomsky employed a semantic black box for words (and ignored the semantic effects of syntactic transformations), contemporary linguistics uses computers to generate semantic probabilities; although this has not yet resulted in a Turing-worthy computer, it has enabled the successful analysis of texts (and their relationship to others) using a completely superficial approach. Instead of building a language out of quantized blocks, digital incursions on semantics will continually produce models which get better and better at approximating (and then surpassing) human capabilities for significations. The question of when computers will achieve a moment of "true" or "knowing" reference is irrelevant because it presumes the elimination of a black box which functionally no longer exists.
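As a toy illustration of this "completely superficial approach," texts can be compared as nothing but word-frequency vectors, with no semantic black box anywhere. A sketch with invented sentences:

```python
from collections import Counter
from math import sqrt

# A completely "superficial" comparison of texts: no semantic black box,
# only word-frequency vectors and the angle between them.

def similarity(a, b):
    """Cosine similarity between the word-count vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

t1 = "control society replaces discipline with continuous modulation"
t2 = "discipline gives way to continuous modulation in the control society"
t3 = "the cat sat on the mat"

print(similarity(t1, t2) > similarity(t1, t3))  # -> True
```

The function never "knows" what any word refers to, yet it reliably groups related texts--which is exactly the functional dissolution of the semantic black box described above.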

In psychology, a similar phenomenon: the downfall of hegemonic categories like "the mother" or "Oedipus" that provided organizing principles for the full spectrum of human activity. Instead: the statistical influence of a particular type of mothering, the probability of voting for the same political party as your father given x, y, and z.

A tendency towards "theory" bringing together useful aspects from a wide range of works, proudly independent of a foundational set of categories to be assumed. "A Marxist analysis of" x built into a larger, more autonomous theoretical edifice... or at least, explained without taking for granted the importance, validity, and common knowledge of the conceptual tools it employs.

[It seems as though I'm cutting down on verbs. I guess I'm tired.]

Political science trapped by its own black boxes. Increasing reliance on statistical analysis with a seeming inability to peer very deeply into the black boxes of "culture" and "institutions." Result: political science is not on the predictive forefront of the academy.

[That's definitely an overstatement and unnecessarily harsh. I'm losing it.]

Fiction may have anticipated by many decades the move from assembling black boxes to deconstructing them. In this sense, the high modernism of, e.g., Joyce is ahead of the modern disciplinary project. This comforts me... a little.

[[Edit immediately after posting: this is a pretty absurd claim, especially given the arguments made elsewhere in this post about interiority with respect to the human. It may be necessary to elaborate on a distinction between "flat" and "deep" interiority. There's a seeming paradox lurking here: a tension between the flattening of interiority and the invasion of the black box. It's important to keep in mind that the previous "deep" conception of interiority was one that had to be papered over through objectivation. But whereas the black box promises correspondence to a deeper interior, the flows of control seem more indifferent to this core because it does not underpin them--in fact, it is largely irrelevant to them. As before... this is an incomplete exploration!]]

Skipping straight to the heart of the 'hard' sciences... well, the Pinch article is about physics. What's physics like in the control society? I'm really not qualified to say. Does physics recognize and take into account the probabilities of its measuring instruments? How precisely does it replicate the digital system of perfect exchange? *Shrug.* I'm way out of my area here. Time to sleep.

Friday, October 19, 2007

Disenchanted

What do you do if your intellectual work reminds you--hour by hour, minute by minute--that there is no external source of authority for normative action? That philosophy can brilliantly tear apart the prescriptions of philosophy before it, but only weakly make recuperative attempts at new normative values (these will be the next wave's targets)? Assuming you have some basic psychological inhibitions in place, you can avoid suicide. Great. I don't want to trivialize that. But can we be comfortable with the complete lack of theoretical justification for normative action?

I've heard and thought about two related reasons why maybe we can: (1) We manage to make it through the social necessities of everyday life, just like everyone else. We can doublethink our way into (a certain degree of) social cohesion by ignoring some deep facts about social reality and using the practical knowledge we have to get by. (2) We should be grateful for our ability to begin perceiving and understanding the world around us. The phenomenal universe, including others and self, is beautiful... or just complex... and perhaps therefore interesting. Let's just live and think about that universe.

These models are somewhat seductive for me. I do like trying to understand the world. But can reveling in that be enough? Short-termism corresponds to a particular political-economic-cultural complex, like any other philosophy. A number of diverse causes produce a similar outcome: humans passively engaging in advanced capitalism without long-term or global perspectives on their behavior. The result? All the bad things that result from politically disinterested capitalism on a small scale: environmental destruction, wars, exploitation, etc. It's not enough to say "if I were in charge, I wouldn't be conducting wars or destroying the environment or exploiting Third World labor." Some other people would, and my apathy is all they require.

And yet, I can't summon a moral justification for interfering with other people? I'm willing to throw up my hands and default to libertarianism because I'm not willing to commit to a moral decision-making calculus? Pathetic. If I become an academic and teach in a framework like this, why do I deserve to be alive at all? Why disillusion other people when right now they want to go do things that intuitively I think are good but logically I can't justify?

In part because I have no more trust for the intuitive than for anything else any more, especially when it comes to reflection about the long-term and the global.

This isn't an idle academic question that I should just stop worrying about. That attitude is part of the problem. The solution to this problem, provisional as it may be, determines how I act now to orient myself for the rest of my life. I need some better ideas. Greater minds than I have tackled this problem; recommendations of books that those people wrote would be greatly appreciated.

Wednesday, October 17, 2007

Control Society and Signification

[Uh... OK, here's my first take on this. It seems really inadequate to me now, so I'll probably revise it, or at the least, write an apology/clarification/improvement at some future time. As always, I'm interested to hear thoughts. If anyone reads this blog, which of course isn't something I take for granted. Heh.]

Deleuze's "Society of Control" essay has influenced me enormously. It's my favorite of the many (paradoxical) attempts to create a meta-narrative of the transition from modern to post-modern society. It's very short, so I strongly encourage you to read it. Basically, he argues that implicit in Foucault's work is a movement beyond discipline into control, which (postmodern) society increasingly realizes. Whereas discipline operates through enclosure and discontinuous transitions between heterogeneous spaces, control connects those spaces. It replaces the completion of segmented disciplinary elements (school, factory, barracks) with ongoing processes (education geared towards commercial success, business as education, computers tracking individuals with information that follows them wherever they go).

I want to focus on the particular idea that discipline is analog, whereas control is digital (or "coded"). That is, disciplinary spaces resemble each other in that they use similar methods for creating order and molding subjects, but they do so to different ends. In control society, spaces impose codes that can be used or transformed for use elsewhere; individuals accumulate, e.g., educational capital through continuing education and through participation in "real life" capitalist, militaristic, or medical practice. In particular, I want to think about how processes of signification ("coding" in a more abstract sense) change when the exercise of power relies on a single, interchangeable code.

Deleuze explicitly says that control societies employ a numerical language. Presumably, we can contrast this numerical language with the more tightly contained and ideological languages of disciplinary spaces. To take the example of the school, discourses about refinement, virtue, personal development, wisdom, etc., give way to quantitative evaluations framed in terms of a continuum of achievement. This shift flattens the field of educational outcomes by proclaiming the possibility of ordinally ranking (or at least somehow quantifiably ordering) students on the basis of grade level, test scores, GPA, etc. In doing so, it lowers the barriers of the school to other spaces within society; it generates an objective basis for selection in corporations or other educational institutions.
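The flattening described here can be made literal: once students are reduced to numbers, a single line of code orders them. A sketch with invented names and scores:

```python
# The flattening of educational outcomes made literal: heterogeneous
# students become sortable tuples of numbers. Names and scores invented.

students = [
    ("A", {"gpa": 3.6, "test": 1380}),
    ("B", {"gpa": 3.9, "test": 1290}),
    ("C", {"gpa": 3.9, "test": 1480}),
]

# One interchangeable code: rank by GPA, break ties by test score.
ranked = sorted(students, key=lambda s: (s[1]["gpa"], s[1]["test"]), reverse=True)
print([name for name, _ in ranked])  # -> ['C', 'B', 'A']
```

The sort key is the "objective basis for selection": any institution downstream can apply it without knowing anything else about the students.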

I mean "objective" in a broader sense than might be obvious from that last sentence. Not only does quantifiability improve the efficiency of selection by enabling standardized criteria; it may also increasingly correspond to the demands corporations make on employees. The type of skills required to successfully crush a culturally contingent and intellectually shallow standardized test might prove more useful in a modern company than would some ephemeral "deeper" wisdom. Regardless of whether this strategy is smart, it reflects a general change in the relationship of individuals to their codes:

Whereas discipline targets the self-contained mental universes of individuals, control increasingly bypasses individual meaning-making in favor of coding which can be completely processed externally (and thus, in the above sense, more "objectively.") If your school-numbers fluidly determine your spot in the corporate hierarchy (only to be altered by the improvement of your numbers through the accumulation of additional educational or corporate capital), there is no need to "socialize" you or force upon you the contrived ideology which used to uniquely characterize individual disciplines.

Thus we face the capitalism of Weber's worst nightmares; one in which all meaning has been drained from individual thought and reintroduced as not only instrumental but exterior. The bureaucratic-capitalist apparatus can sort you and move you around on the basis of your numbers regardless of your feelings on the matter. But in fact, you don't have feelings on the matter. The culmination of the process of disenchantment and destruction of meta-narratives is your shit hitting the material fan. You participate in capitalism because it's there and it's how things are done and you need it to supply your body. But scarier still is the increasingly materialist slant of knowledge about you.

Instead of psychoanalysis fixing your mind, drugs fix your brain. Architectures of control continually influence your behavior at a level outside your conscious power. As in the case of discipline, power organizes bodies in space in order to achieve certain ends; but now these ends are not psychological. Instead of using space to assist the inculcation of particular behaviors and values, space continually "modulates" human behavior, with no additional disciplinary control necessary.

Punchline: the shift from discipline to control entails a new relationship between knowledge, power, and individual. Whereas discipline exerts power over individuals by ensnaring them in a matrix of knowledge producing certain actions, control obviates the necessity for individual reproduction of knowledge. In fact, individual awareness of the knowledge employed by control may be counterproductive. Thus, in a challenging turn for those who object to ideological influences on epistemology, control reduces knowledge to market-tested instrumental facts, predictive and statistically significant.

Tuesday, October 16, 2007

Dichotomy Dichotomy III: "Nexus" Question

I promise this is the last one of these... for now.

Reading Berger and Luckmann's thought-provoking The Social Construction of Reality probably led me to refine an idea I had about dichotomous vs. continual spaces and institutions. Basically: encounters with social institutions elicit simplifying decisions about otherwise more continuous reality. A courtroom epitomizes this process. The jury filters information from complex situations outside and inside the trial, ultimately choosing from a fixed set of options to answer a fixed number of questions. The choice they make -- prototypically, "guilty" or "not guilty" -- then sets in motion a large number of processes which vary widely based on the answer but cannot be fully predicted from it. Let's think about this decision point as a socio-spatio-temporal nexus.

I didn't call it "socio-spatio-temporal" (just) to be pretentious. Rather, I mean to distinguish this type of point from other points in space-time. It's easy to see any point-moment as pivotal or nexus-like by virtue of its conical extension of influence backwards and forwards through space-time. I want to talk about moments characterized by (a) stark, irreversible, dichotomous and uncertain choices with (b) socially meaningful inputs and outputs. (This concept itself, like most I use, exhibits more properties of a continuum -- or "family resemblance category" -- than of dichotomous inclusion or exclusion.)

Examples in approximately descending order:
  • The outcome of an election (say, for example, a contemporary US presidential election). Diverse and large-scale social processes cause a vast multitude of individuals to cast ballots for particular candidates. This in itself is a nexus of some salience (the impact of an individual ballot is probably far greater for the individual voter than it is for the outcome of the election as a whole), but the election as a whole massively overshadows it. The electoral college algorithm is applied to the votes (which is not to say that some of these votes might not be fraudulent) to yield a single winner, who is then at liberty to blow up the entire fucking world for four to eight years. The significance and complexity of inputs and outputs, combined with the absolutely dichotomous nature of the choice (it's a "winner-take-all" game!) make this, for me, the single best example of the type of nexus I'm talking about.
  • The judge's decision in a debate round. An extended process of research and movement all comes to a head in the debate round itself, which does its own work in channeling these processes towards the decision point. Debates reify arguments and establish sides, reducing nuances and promoting allegiances between sets of arguments that prepare the way for the judge to make a final decision between the two sides. The point of the judge's decision becomes an even more significant nexus point (and therefore, more of a nexus point) in elimination rounds. [Note to the uninitiated: debate tournaments contain preliminary rounds, usually eight, followed by elimination rounds. Everyone participates in preliminary rounds. The top group of teams (usually 32, but it can be any power of 2) then begins another tournament, seeded by performance in preliminary rounds, in which only the winners of each round advance to the next.] Because preliminary rounds operate in conjunction to produce and order the set of teams advancing to elimination rounds, teams can compensate for losing one round by winning others. No such compensation is possible in elimination rounds; more rides on each decision. To extend the courtroom analogy, being in an elimination round is like being out of appeals.
  • A student's answer on a true-or-false test. A whole universe (world-historical, mathematical, biological, etc.) is filtered by a particular educational configuration and then by a student, who must in turn make binary decisions about facts within that universe. I consider this a slightly less prototypical nexus situation than, say, a debate round, because it allows a larger temporal window for reversibility and because the capacity to compensate for a wrong answer is higher. Put another way, the larger the number of parallel events, the less nexus-y an event is, since the statistical effect of an increasingly large sample size cancels out the dichotomous effect of the yes-or-no structure.
  • Word choice. Deliberately vague example, because I want to call attention to the broad spectrum of sample spaces for word choice, some of which exhibit "nexus" qualities quite well, and others of which barely do at all. At the former end would be the first (referential/unironic) decision to use an obviously ideologically charged term (like "fag" vs. "homosexual" or "terrorist" vs. "freedom fighter") in a socially important setting (like the first time meeting new people or a presidential address). Choices like these have relatively high consequences because of the psychological and social commitments they entail. At the opposite end of the spectrum would be the use of a seemingly unimportant term in a less emotionally charged situation. (If yesterday I once again told a close friend that I "loathed" economics, it won't matter much to either of us when today I tell her that I "detest" economics.)
  • A socially relevant coin flip (or any other gamble on a functionally random event) is almost there, but not really. This type of thing defines the periphery of the category I'm trying to discuss, and the analysis below only sort of applies to it. This event only meets half of my criteria in (b) above, since the input isn't really socially relevant to the outcome, but the output can certainly be determinative... of, e.g., sides in a sporting event or debate round. The spatio-temporal visualization of this event (in the social context) would be only a single (forward-facing) cylinder of influence, not the double cylinder of past causes and future effects.
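The claim in the true-or-false example -- that a large number of parallel dichotomous events washes out the all-or-nothing character of each one -- can be sketched with a quick simulation. This is my own illustration, not anything from the post; the function name and numbers are invented:

```python
import random

def nexus_salience(n_events: int, trials: int = 2000) -> float:
    """Estimate how much a single dichotomous event can move the aggregate:
    the standard deviation of the mean of n_events fair yes/no outcomes."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    means = []
    for _ in range(trials):
        outcomes = [random.random() < 0.5 for _ in range(n_events)]
        means.append(sum(outcomes) / n_events)
    mu = sum(means) / trials
    var = sum((m - mu) ** 2 for m in means) / trials
    return var ** 0.5

# One event is all-or-nothing; with 100 parallel events (say, exam
# questions), each individual yes/no barely shifts the overall score.
print(nexus_salience(1))    # ~0.5
print(nexus_salience(100))  # ~0.05
```

The spread of possible aggregates shrinks roughly as one over the square root of the number of events, which is one way of saying why a single true-false answer is less of a nexus than a single electoral outcome.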

These nexuses fascinate me because they focus our attention on the operation of causality in the world. If parallel universes exist and we were able to classify them, our taxonomies would group them according to outcomes at these nexuses. (The "uncertainty" required by the definition enables the existence of parallel universes with counterfactual outcomes, although here I'm playing fast and loose with a correspondence between uncertainty for the observer and causal uncertainty. The assumption here is that the more uncertain a situation is for observers, the more likely it is that small changes that are relatively proximate/prior in space-time will be able to switch the outcome. This might not hold true in weird circumstances, like when causes well up from the distant past or, more mundanely, when observers just have bad information.)

These events may "bend" social behavior around them in predictable/generalizable ways:

More emotional charge surrounds nexuses than other moments in human experience. Even if the outcome of a nexus is less important than the total impact of an ongoing non-nexus process or behavior, temporal compression makes the nexus appear more salient. Indeed, in the short run, confronting a nexus makes more sense than contemplating longer-term patterns expressing themselves along continua... but this short-term tradeoff might paradoxically be considered irrational in the long run. The impact of the long-run behavior might not only be larger; it might produce a systematic bias on each short-run nexus.

Put another way (sociologically or politically, rather than psychologically), nexuses raise the stakes of social interaction which could influence them. They maximize conservative and loss-averse behavior from entities reasonably invested in the status quo. Irreversible large changes elicit fear from those who stand to lose from these types of changes, whereas incremental change, even if it ends up in the same place, is less threatening, because of higher reversibility and lower uncertainty. Inversely (although probably not proportionally, given what we know about general tendencies toward loss-aversion... not to mention revolutions in this millennium), the severely disempowered and disadvantaged are more likely to take risks to ensure a favorable outcome at a nexus moment; these moments afford unique leverage over the course of history, and an opportunity for large gains in an otherwise insurmountable situation.

Nexuses contribute to socialization. (Here's where some influence comes directly from Berger and Luckmann.) A complex set of interacting social forces determines the outcome of a nexus; as a result, nexuses drive actors to interact with other relevant actors. The mere fact of interaction in itself furthers socialization, as involved entities calibrate concepts with each other and appeal to shared bases of knowledge in order to be persuasive. The decision-makers, as the ones motivating nexus-associated behavior, possess asymmetric control over the socialization process. Candidates have to cater to voters. A student taking a test concerns herself not so much with "objective" truth, but with truth as presented by the teacher. That these interactions are strategic amplifies the socialization, since fully integrating oneself into a relevant symbolic regime maximizes one's ability to affect the outcome. It seems to me that the most effective candidates with respect to a particular population, students with respect to particular material, or debaters with respect to particular judges, are those who most completely think in the same framework as their audiences. But even if that turns out to be completely not true, at minimum actors acquire intellectual and social resources that influence future behavior.

So, is this completely crazy? And if not, can y'all think of other general characteristics of nexus-defined situations? I bet there are a lot more (e.g., "thickening history"), but it's getting late and I sort of feel done with this entry. OK.

Monday, October 15, 2007

Dichotomy Dichotomy II

Continuing last post, here are some implications of the breakdown of the dichotomy dichotomy for social theory:

Ideal types aren't enough. They define the modes of a sample space, but not the terrain of that space otherwise. How dichotomous (or otherwise clear-cut) is the division of entities between types? Do all (or does each) correspond to a metaphorical line, cylinder, or hill? (Here I'm envisioning a three-dimensional sample space with y as quantity or salience and x and z as some relevant factors... but of course I've chosen this number of dimensions simply because they correspond to concepts most easily apprehended through statistical-visual representation. Any number of conceptual dimensions are possible, and these only increase the need to more carefully articulate a model's terrain.) Or something else? How quantitatively and qualitatively significant are the ideal types with respect to each other?


Dialectic may itself be an ideal-typical relationship between two entities, not a homogeneous phenomenon recurring everywhere. We should expect ideal, material, and ideal-material dialectics to function differently, and we shouldn't presume an equality of influence from each side. More on this later.

The Dichotomy Dichotomy

Let me just say: it's probably no longer cool to call something a false dichotomy. People in section -- inspired perhaps by late greats of philosophy or by someone they heard say something smart in another section -- are really into replacing dichotomies with continua. Gender, sex, politics, truth, whatever. I can't claim to be innocent here. It's an easy point to make and sometimes appropriate. But I think we need to dig deeper into the general ontological problem posed by the "dichotomy-continuum dichotomy." [That phrasing gives you a big hint about where I'm headed with this post.] Here are some things about that:

The terms "dichotomy" and "continuum" do obtain quite clearly in different situations.

Consider the difference between the operation of a computer at micro and macro scales. Fundamentally, computers use zeros and ones to encode information; one can analytically divide the system into identical units in exactly one of two states. The graphical user interface, an emergent property of certain computers, displays colors selected from an open space whose axes are continua. (Note: given a consideration of the interface as such, it's irrelevant that one could eventually quantize this color space--find two colors with no third color in between--since the computer can articulate the color space more finely than human vision can apprehend it.)
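The micro/macro point can be made concrete with the arithmetic of a standard 24-bit display (a sketch of my own; the 8-bits-per-channel figure is the common convention, not something from the post):

```python
# Underneath, each color axis is pure dichotomy: 8 bits, each a 0 or 1.
# At the level of the interface, the resulting steps are so fine that
# the axis behaves as a continuum.
levels = 2 ** 8            # discrete states per channel
step = 1.0 / (levels - 1)  # spacing between adjacent representable intensities

total_colors = levels ** 3  # three channels: red, green, blue
print(total_colors)  # 16777216
print(step)          # ~0.0039 of full intensity between neighbors
```

Roughly sixteen million representable colors, with adjacent codes typically closer together than vision can distinguish: dichotomous at the micro scale, continual at the macro.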

One can also find examples of this hierarchical relationship reversed--i.e., of the continual loosely underlying the dichotomous. A coin's position as it travels through Newtonian space moves along a straightforward continuum, but when it lands it displays either heads or tails. Or take a chemical reaction requiring a certain activation energy (a threshold) to proceed. Incrementally increasing the available energy will change the behavior of individual molecules but will not enable the reaction until the energy crosses the necessary threshold.
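The activation-energy case is a pure threshold function: continuous input, dichotomous output. A minimal sketch (the threshold value and units are invented for illustration):

```python
ACTIVATION_ENERGY = 2.5  # arbitrary threshold, arbitrary units

def reaction_proceeds(energy: float) -> bool:
    """Continuous input, dichotomous output: incremental increases in
    energy change nothing qualitative until the threshold is crossed."""
    return energy >= ACTIVATION_ENERGY

# Sweep a continuum of energies from 0.0 to 5.0; the outcome flips
# exactly once, at the threshold.
outcomes = [reaction_proceeds(e / 10) for e in range(0, 51)]
print(outcomes.count(False), outcomes.count(True))  # 25 26
```

The input varies smoothly across fifty-one evenly spaced values, but the output takes only two values and switches a single time -- the continual loosely underlying the dichotomous.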

But sometimes a situation makes it hard to decide between categorizations.

In the case of sex, scientists have distinguished between body configurations resulting from XX and XY with a high degree of empirical success. Yes, everyone has estrogen and testosterone... but the distribution of these hormones is statistically bimodal, not linear or a bell curve, and the mode to which an individual corresponds can be predicted with high (but not perfect) accuracy from an examination of chromosomes. (Of course, the X and Y chromosomes also possess no essential nature, just a statistical differentiation, but operating in the present and on the timescale of relatively slow biological evolution, we can clearly distinguish them in nearly all instances.) (As far as I know. I'm not a scientist, so I'm open to being corrected.)

We can imagine all sorts of phenomena whose statistical distributions fall somewhere in between the perfect bimodality of the coin flip and the perfect linearity of (a slice of) a computer's color space. These differences can even occur within a conceptual domain, like the distribution of resources in different societies or of grades in a class.
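One way to picture this family of in-between distributions is a two-component mixture whose mode separation is a dial: crank it up and you approach coin-flip bimodality, turn it to zero and you get a single smooth hump. The function names and parameter values below are my own illustration:

```python
import random

def mixture_sample(separation: float, n: int = 10000, seed: int = 1) -> list:
    """Draw n values from an equal mixture of two normal distributions
    centered at +separation and -separation (each with unit spread).
    Large separation: sharply bimodal; separation near 0: one smooth hump."""
    rng = random.Random(seed)
    return [rng.gauss(separation if rng.random() < 0.5 else -separation, 1.0)
            for _ in range(n)]

def gap_depth(samples: list) -> float:
    """Fraction of samples near zero -- low when the distribution is bimodal,
    high when the middle ground is well populated."""
    return sum(abs(x) < 0.5 for x in samples) / len(samples)

print(gap_depth(mixture_sample(4.0)))  # near 0: two well-separated modes
print(gap_depth(mixture_sample(0.0)))  # ~0.38: one central hump
```

Every intermediate setting of the separation parameter yields a distribution somewhere between "dichotomy" and "continuum," which is the terrain the ideal types by themselves leave unmapped.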

Does this create a self-referential paradox?

No, thank god (Hofstadter?). We've broken down the meta-dichotomy between dichotomies and continua... in favor of what might be a meta-continuum. But this is no contradiction. The original argument doesn't reject these concepts entirely; it just contextualizes them and makes them more meaningful. Given a potentially infinite sample space (remember, we're talking about the sample space of sample spaces), it makes sense to call the distribution of models from dichotomy to continuum continual:

(a) We can continually provide examples of arrangements anywhere between the two extremes.
(b) There's no way to link two types of arrangements and demonstrate an inequality between their infinite extensions (the way there is for, say, real vs. rational numbers).
(c) Therefore each division of the continuum (no matter how thinly you want to slice) could be said to contain an equal infinity.
(d) Therefore... perfect continuum. QED. (But then again, I'm not a mathematician either, so I could well be wrong about this stuff, too.)
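For what it's worth, the standard cardinality facts this argument leans on can be stated precisely (textbook mathematics, not original to the post): the whole numbers and the rationals are equinumerous, both countable, while Cantor's diagonal argument shows the reals form a strictly larger infinity.

```latex
% Countable sets share the same infinite cardinality; the continuum exceeds it.
\[
|\mathbb{Z}| = |\mathbb{Q}| = \aleph_0
\qquad\text{but}\qquad
|\mathbb{R}| = 2^{\aleph_0} > \aleph_0 .
\]
```

So (b) is exactly the condition that fails between the rationals and the wholes but holds between the reals and the rationals.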

This doesn't necessarily mean that you couldn't, by strategically limiting the sample space, generate a useful distribution of distributions. I'll hint at what that would look like for psychology at the end of the post.

Why do I care?

Implications for the rhizome: (1) Never forget that "there are knots of arborescence in rhizomes, and rhizomatic offshoots in roots" (Deleuze and Guattari, ATP, 20). The pure rhizome, if it exists, lies at the end of a continuum of concepts (and a continuum of things). (2) Recognize that perhaps sometimes arborescent ontology works better than rhizomatics; make it rhizomatic not by forcing the real situation into a bad framework, but by putting the tracing on the map and by plugging the tree into other apparatuses. If it's true that models operate on a continuum, any tree formation will inevitably be interacting with (more) rhizomatic assemblages that lie proximate to it. This implies a non-arborescent (though not necessarily completely rhizomatic) plane of consistency. It might imply a relativisation of the extent to which situations map onto dynamical systems.

Implications for psychology: we need to think more carefully about (or I need to read more about) the interactions between root structures (Gestalt principles and decision trees) and probabilistic networks of influence. Abstractly, it seems that aspects of conscious experience will result from large-scale statistical patterns in the brain... but there are dichotomous (and trichotomous, etc.) phenomena produced at the level of consciousness which must result from neural activity crossing a certain type of threshold. For example, the "faces/vases" optical illusion causes the brain to shift back and forth, but doesn't allow you to see 75% face, 25% vase. You recognize a face or you don't. Face recognition generally "clicks" at some point, after which an entity goes from being a human in the abstract to a human you know, in turn bringing to bear a cascade of new information from your memory. It seems like it might be possible to examine salient psychological phenomena to determine generally how dichotomous and how continuous they are... Although you probably could do in the abstract what I did in the previous section, limiting the sample spaces to variables psychologists agreed were important might provide some predictive results. For example, if the distribution of distributions is strongly bimodal, this would provide the basis for a useful classificatory scheme of psychological phenomena based on a split into those two types. Yet another thing I'm not qualified to do!
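The faces/vases point -- continuous evidence in, all-or-nothing percept out, with no "75% face" in between -- can be sketched as a toy bistable system with hysteresis. This is entirely my illustration, not a model from the psychology literature; the thresholds are invented:

```python
def bistable_percept(evidence_stream, up=0.7, down=0.3):
    """Toy bistable perceiver. Evidence is a continuous value in [0, 1]
    favoring 'face', but the reported percept is always all-or-nothing
    ('face' or 'vase'), flipping only when evidence crosses a threshold."""
    state = "vase"
    percepts = []
    for e in evidence_stream:
        if state == "vase" and e > up:
            state = "face"
        elif state == "face" and e < down:
            state = "vase"
        percepts.append(state)
    return percepts

# Evidence drifts smoothly up to 1.0 and back down to 0.0; the percept
# takes only two values and flips exactly twice.
stream = [i / 20 for i in range(21)] + [i / 20 for i in range(20, -1, -1)]
print(set(bistable_percept(stream)))  # only {'face', 'vase'}, nothing in between
```

The separated up/down thresholds also capture the "clicking" quality: once recognition snaps in, modestly weaker evidence doesn't immediately snap it back out.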

Implications for section: think before you say "the reading really dichotomized this thing, but actually it seems like more of a continuum." You might be obfuscating more than you reveal.

Saturday, October 13, 2007

Obedience to Milgram

Now that I've seen the Milgram experiment more times than I can count, I have a very different outlook on it. In particular, I feel very differently about the main message: that anyone can become a killer if placed under the proximate influence of an authority. It's an important result, but what's more important to me is the type of authority that has to be configured in order for the experiment to work.

Not only do different people relate to authority differently (and in ways that can't just be expressed on a continuum of "obedience"), the same people can relate to different authorities differently. Don't confuse science with complete generalizability: the Milgram experiment doesn't produce "authority" and "obedience" in the abstract or in a vacuum... and it would probably be impossible to do so. The subjects of the experiment would have reacted quite differently if the order to kill had come from a Nazi/fascist authority instead of the culturally specific array of forces created by Milgram:
  • Legal - In some ways, the most obvious type, and the one most closely resembling the Holocaust's enabling conditions. One of the most striking moments in the experiment video comes when one subject asks the experimenter if he'll assume full responsibility for the results of the experiment. (Of course, the experimenter says "yes," and the subject continues). The application of, say, "On the Jewish Question" is obvious--if people view public relationships as governed by legal (as opposed to human, or ethical) obligations, a malicious power need only assuage their legal fears before it can command them to act unethically.
  • Capitalist - Saying that money's not the issue doesn't make money not the issue. Workers don't constantly exchange money with their employers, either, but they enter into a symbolic relationship ultimately underpinned by money. Payment for man-hours provides the employer with a store of capital used to extract work. The Milgram fliers spell it out: they will "pay $4.50 for an hour of your time." Members of a capitalist society get used to thinking about the components of this exchange as inevitable, and the forces of reciprocity bind tightly. Think about it this way: if the participants had fully internalized the implications of the experimenter's claim that "money's not the issue; you've already received your check," many of them would have just walked out the door.
  • Scientific - I see two relevant types of scientific authority operating in the experiment video. First, the general appeal to the accumulation of scientific knowledge. Based on participant responses to the experimenter, this seems less important than other factors. Second, the scientific claim to knowledge of the equipment. Subjects extrapolate their technical ignorance about electricity to a general ignorance of the apparatus; they can then comfortably leave power (figurative and literal) in the hands of the experimenter. This supplements the institutional authority insulating, e.g., Nazi subordinates; subjects here can claim not only that they did not have control over what they were doing, but that they did not have true knowledge of what they were doing.

Note that the authoritative relationship operates bidirectionally; just as much as the experimenter exerts downward pressure on the subject to obey commands, the subject actively defers power upwards to avoid responsibility. In a situation without experimental controls, it's easy to see how these pressures could flow easily through (and perhaps even be amplified by) most any bureaucracy.

In particular, we should all think about the ways in which current society may have modified the above relationships to authority. I'd call the changes ambivalent (not categorically awful, so this is optimism for me). No doubt I'll talk about my opinions on those changes in later posts.

Question for comment: how would Milgram have designed this experiment today?

Genocide

My first blog post... and most recent salvo in my attempt to explain why I am not happy with "genocide" as defined by "Genocide" (Gov 1235).

- Ideas are materially embodied. Ideas in the mind also "exist" in the brain, even if outside observers cannot directly perceive them. Further, certain material conditions (genetic or environmental) contribute causally to the formation of certain ideas.

- Certain ideological approaches further reify these material ideas by assigning them permanence; in particular, by labeling a group of persons as irredeemably holding a certain idea. These designations can be considered more damning--that is, the bearer can be considered more essentially evil--than can membership in a biological group. If genocide differs from the Inquisition (and is worse than the Inquisition) because biological membership was considered harder to evade than religious belief, violence based on the type of ideological differentiation I am discussing resembles genocide more than it does the Inquisition.

- In particular, a fundamental assumption of the "war on terrorism" is the existence of the "terrorist" individual. Who are the terrorists against whom war has been declared? The prototypical terrorist in the popular imagination is the (nine-eleven) suicide bomber. The war on suicide bombers cannot be pursued retroactively, but being a terrorist is punishable by death. Those associated with the terrorist ideology are therefore to be considered terrorists for the purposes of the war; and this is not an ideology which one can renounce to avoid death.

- The war on terror thereby attempts to annihilate a large group of people, "terrorists," essentially sharing a uniquely and unabsolvably terroristic form of evil. The only way to eliminate the bad action, terrorism, is to eliminate the bad people, terrorists. How is this not genocide? The argument that this type of violence does not make a sufficient appeal to the biology of the victim misses the point, since it is applied within a framework just as inflexible and methodologically questionable.

- If the term "genocide" must exclude this particular type of ideological violence, it ought to be replaced with new concepts which group types of violence based on broader goals. To say that genocide ought to remain distinct (and thus in practice distinct as a matter of academic and legal concern) is to exhibit serious bias in favor of societies who practice large-scale violence using justifications other than one highly contingent biologistic grouping based on a particular concept of "race."