Non-Human Value (Part II)

I just noticed that I didn’t click “read on” for the rest of Larval Subject’s post. He writes:

If ontologically we cannot presuppose the formal identity of agents across diversity– indeed, if we cannot even presuppose our own identity by virtue of the fact that we become new agencies when we enter into new relations –rule-based ethical systems are out the window. Or perhaps, less dramatically, rules, criteria of judgment, are effects or results, not grounds. Yet if the domain of the ethical is not the domain of rules that would allow us to evaluate particular circumstances according to universal rules, then what is it? Perhaps, rather than judgment, the domain of the ethico-politico field is the domain not of judgment, but of problematizations. In other words, it would be the domain wherein problems of the coordination of networks or assemblages are formed. What we previously referred to as norms or rules would instead become attractors, tendencies, paths towards actualization of collective-bodies (groups, assemblages, or ecologies, all of which are objects at higher orders of scale and complexity).

I was going to write that in fact the speculative realism class, by doing Latour for example, is already doing work on value and normative theory. Another way to look at what Larval is talking about is as what is eventually going to upend all notions of responsibility and duties, but it just hasn’t taken hold, since philosophers are about three hundred years behind the brain materialists. Here I’ll name Derrida and just say he begins crucial work by undercutting the human/animal distinction (and therefore the human/machine difference) as one between “responding” and “reacting” in The Animal That Therefore I Am. For an era of certain philosophers, the main complaint about the social sciences (see Arendt’s On Violence essay) was that they treated human beings as objects. We see that even more now. But we can’t wish this away, just as we wouldn’t wish away the knowledge that a certain cranial defect in a defendant mitigates his or her responsibility for a crime. The question has long been, though, how far we go: do synapses firing off not offer a mitigation? Why not the fact that “choice” is but an a posteriori fiction, arriving moments after the brain has fired off its commands? We’ll leave aside the whole question of freedom here, but at the least, sometimes the more just result comes not from treating humans as objects too much, but too little, since seeing the human being as a reactive body (in this case, one with a “defect” of some sort) would lead us to acknowledge our problematic conceptions of blameworthiness.

Put otherwise, our jurisprudence, it’s not original to say, is founded on a woefully out-of-date conception of free will. But in recent years this conception has been sutured to base materialist accounts of the brain, leading to judgments that marry 6th century conceptions of free will to 21st century science. (Oh, it’s not a defect? Then it’s perfectly free will—an either/or.) I’m sorry, by the way, if I’m not properly problematizing my language along the way: if it helps, just put rabbit ears around every word you want to contest.

The point is that this calls for more, not fewer, conceptions of ethics and objects. A lot is jumbled in here above, but to be clear, I think Larval is on to the right question: not whether or not ethics are simply human, but why we ever circumscribed them to the so-called human in the first place.
