The other day I was listening to an episode of Malcolm Gladwell's Revisionist History podcast about 16th-century maritime insurance and baseball doping. The connection he drew was pretty tenuous, but it was interesting.
Essentially Gladwell explores the method of Casuistry for resolving ethical dilemmas posed by new problems. Casuistry is a process of reasoning developed by the Jesuits. A very brief synopsis of this method would be that one examines a novel problem not by applying general moral principles, but by establishing a ‘standard case’ (the word casuistry derives from the Latin noun casus, meaning “case” or “occurrence”) which has a clear moral outcome. One then creates a rigorous taxonomy to define where the similarities lie and where the points of difference are. Essentially it is comparative analysis for ethics. ‘Casuistry’ is also used as a kind of philosophical slur to imply quibbling and equivocation in the face of tough moral decisions, but this doesn’t seem to faze Malcolm. He spends the episode applying the casuist method to a famous baseball doping scandal, looking at how one evaluates modern growth hormone technologies in comparison to traditional surgical techniques. In the process, he demonstrates what it is to equivocate in an applied and compellingly practical manner. More specifically, he demonstrates how the casuist method can be used to inform the morality of new problems.
Now, I’m not about to pretend I really know anything about moral philosophy, and I definitely don’t know anything about Jesuits, but listening to the episode made me reflect on the current state of conversations about the risk of new technologies producing unintended negative consequences. We are facing a whole load of NEW and juicy problems. Problems like how to effectively moderate the line between freedom of speech and hate speech, or how to deal with deepfakes. We have never encountered these problems in these forms before. So far, we have pretty much produced a stream of endlessly similar yet slightly rivalrous sets of principles, which are then almost universally criticised for being too general and therefore not very useful.
Gladwell makes the solid point that when it comes to a new problem, you can’t solve it by appealing to a principle. Principles don’t help because they are the product of past experience. In a new situation, you’re in uncharted territory and therefore must proceed on a case by case basis. A general rule applies generally; the more you descend into the particulars, the less it remains a general rule.
Take, for example, the problem of virtual reality’s psychological impact. Studies indicate that experiencing a VR simulation in an avatar of a different race to one’s own can have a dramatic impact on the user’s racial bias, as in they were less biased towards other races afterwards. Of course, entirely immersing your two primary cognitive senses has a profound neurological effect! This is why VR can be so effective as a treatment for PTSD and could potentially have many other incredibly positive uses, but it could also present a risk of misuse, manipulation and abuse. One of the largest commercial audiences for VR technologies is children and adolescents playing video games, whose brains are highly plastic and responsive. Having a technology which is potentially this emotionally powerful is a new problem, but VR is like any tool: it can be used well or not so well. It’s not about banning the tool, it’s about understanding the risks, uses, and impacts in an applied manner.
So whilst I am not saying there is no value in the principles work, we don’t need any more principles; we need to get our hands dirty. We need to foreground the critical method for understanding and evaluating the development and use of new technologies. Blanket notions of good and bad won’t cut it. Finding and developing ways to consistently and transparently understand these new problems is of vital importance. There are hundreds of years of moral philosophy and critical methods for understanding the world, so let’s use that knowledge.