Friday, April 28, 2017

The Ethics of Crash Optimisation Algorithms




Patrick Lin started it. In an article entitled ‘The Ethics of Autonomous Cars’ (published in The Atlantic in 2013), he considered the principles that self-driving cars should follow when they encounter tricky moral dilemmas on the road. We all encounter these situations from time to time. Something unexpected happens and you have to make a split-second decision. A pedestrian steps onto the road and you don’t see him until the last minute: do you slam on the brakes or swerve to avoid him? Lin made the obvious point that no matter how safe they were, self-driving cars would encounter situations like this, and so engineers would have to design ‘crash-optimisation’ algorithms that the cars would use to make those split-second decisions.

In a later article Lin explained the problem by using a variation on the famous ‘trolley problem’ thought experiment. The classic trolley problem asks you to imagine a trolley car hurtling out of control down a railroad track. If it continues on its present course, it will collide with and kill five people. You can, however, divert it onto a sidetrack. If you do so, it will kill only one person. What should you do? Ethicists have debated the appropriate choice for the last forty years. Lin’s variation on the trolley problem worked like this:

Imagine in some distant future, your autonomous car encounters this terrible choice: it must either swerve left and strike an eight-year old girl, or swerve right and strike an 80-year old grandmother. Given the car’s velocity, either victim would surely be killed on impact. If you do not swerve, both victims will be struck and killed; so there is good reason to think that you ought to swerve one way or another. But what would be the ethically correct decision? If you were programming the self-driving car, how would you instruct it to behave if it ever encountered such a case, as rare as it may be? 
(Lin 2016, 69)

There is certainly value to thinking about problems of this sort. But some people worry that, in focusing on individualised moral dilemmas such as this, the framing of the ethical challenges facing the designers of self-driving cars is misleading. There are important differences between the moral choice confronting the designer of the crash optimisation system (whether it be programmed from the top-down with clearly prescribed rules or the bottom-up using some machine-learning system) and the choices faced by drivers in particular dilemmas. Recently, some papers have been written drawing attention to these differences. One of them is Hin-Yan Liu’s ‘Structural Discrimination and Autonomous Vehicles’. I just interviewed Hin-Yan for my podcast about this and other aspects of his research, but I want to take this opportunity to examine the argument in that paper in more detail.


1. The Structural Discrimination Problem
Liu’s argument is that the design of crash optimisation algorithms could lead to structural discrimination (note: to be fair to him, Lin acknowledged the potential discriminatory impact in his 2016 paper).

Structural discrimination is a form of indirect discrimination. Direct discrimination arises where some individual or organisation intentionally disadvantages someone because they belong to a particular race, ethnic group, gender, class (etc). Once upon a time there were, allegedly, signs displayed outside pubs, hotels and places of employment in the UK saying ‘No blacks, No Irish’. The authenticity of these signs is disputed, but if they really existed, they would provide a clear example of direct discrimination. Indirect discrimination is different. It arises where some policy or practice has a seemingly unobjectionable express intent or purpose but nevertheless has a discriminatory impact. For example, a hairdressing salon that had a policy requiring all staff to show off their hair to customers might have a discriminatory impact on (some) potential Muslim staff (I took this example from Citizens Advice UK).

Structural discrimination is a more generalised form of indirect discrimination whereby entire systems are set up or structured in such a way that they impose undue burdens on particular groups. How might this happen with crash optimisation algorithms? The basic argument works like this:


  • (1) If a particular rule or policy is determined with reference to factors that ignore potential forms of discrimination, and if that rule is followed in the majority of circumstances, it is likely to have an unintended structurally discriminatory impact.

  • (2) The crash optimisation algorithms followed by self-driving cars are (a) likely to be determined with reference to factors that ignore potential forms of discrimination and (b) are likely to be followed in the majority of circumstances.

  • (3) Therefore, crash optimisation algorithms are likely to have an unintended structurally discriminatory impact.



The first premise should be relatively uncontroversial. It is making a probabilistic claim. It is saying that if so-and-so happens it is likely to have a discriminatory impact, not that it definitely will. The intuition here is that discrimination is a subtle thing. If we don’t try to anticipate it and prevent it from happening, we are likely to do things that have unintended discriminatory effects. Go back to the example of the hairdressing salon and the rule about uncovered hair. Presumably, no one designing that rule thought they were doing anything that might be discriminatory. They just wanted their staff to show off their hair so that customers would get a good impression. They didn’t consciously factor in potential forms of bias or discrimination. This is what created the potential for discrimination.

The first part of premise one is simply saying that what is true in the case of the hair salon is likely to be true more generally. Unless we consciously direct our attention to the possibility of discriminatory impact, it will be sheer luck whether we avoid it. That might not be too problematic if the rules we designed were limited in their application. For example, if the rule about uncovered hair for staff only applied to one particular hairdressing salon, then we would have a problem, but it would fall far short of structural discrimination. There would be discrimination in the particular salon, but that discrimination would not spread across society as a whole. Muslim hairdressers would not be excluded from work at all salons. It is only when the rule is followed in the majority of cases that we get the conditions in which structural discrimination can breed.

This brings us to premise two. This is the critical one. Are there any reasons to accept it? Looking first to condition (a), there are indeed some reasons to believe that this will be the case. The reasons have to do with the ‘trolley problem’-style framing of the ethical challenges facing the designers of self-driving cars. That framing encourages us to think about the morally optimal choice in a particular case, not at a societal level. It encourages us to pick the least bad option, even if that option contravenes some widely-agreed moral principle. A consequentialist, for example, might resolve the granny vs. child dilemma in favour of the child based on the quantity of harm that will result. They might say that the child has more potentially good life years ahead of them (possibly justifying this by reference to the QALY standard) and hence it does more good to save the child (or, to put it another way, less harm to kill the granny). The problem with this reasoning is that in focusing purely on the quantity of harm we ignore factors that we ought to consider (such as the potential for ageism) if we wish to avoid a discriminatory impact. As Liu puts it:

[A]nother blind spot of trolley problem ethics…is that the calculus is conducted with seemingly featureless and identical “human units”, as the variable being emphasised is the quantity of harm rather than its character or nature.

We could try to address this problem by getting the designers of the algorithms to look more closely at the characteristics of the individuals that might be affected by the choices made by the cars. But this leads to the second problem: whatever solution we hit upon is likely to be multiplied and shared across many self-driving cars, and that multiplication and sharing is likely to exacerbate any potentially discriminatory effect. Why is this? Well, presumably car manufacturers will standardise the optimisation algorithms they offer on their cars (not least because the software that actually drives the car is likely to be cloud-based and to adapt and learn based on the data collected from all cars). This will result in greater homogeneity in how cars respond to trolley-problem-like dilemmas, which will in turn increase any potentially discriminatory effect. For example, if an algorithm does optimise by resolving the dilemma in favour of the child, we get a situation in which all cars using that algorithm favour children over grannies, and so an extra burden is imposed on grannies across society as a whole. They face a higher risk of being killed by a self-driving car.
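To make this aggregation effect concrete, here is a minimal simulation sketch in Python. It is entirely my own illustration: the life-expectancy figure, the uniform age distribution and the crude ‘minimise life-years lost’ rule are all assumptions made for the sake of the example, not anything drawn from Liu’s paper or from real vehicle software.

```python
import random

# Purely illustrative (my own sketch, not Liu's or any manufacturer's):
# a crude consequentialist rule that minimises expected life-years lost,
# applied uniformly across an entire fleet of cars.

LIFE_EXPECTANCY = 82  # assumed average lifespan, for illustration only

def life_years_lost(age):
    return max(LIFE_EXPECTANCY - age, 0)

def consequentialist_rule(age_a, age_b):
    """Strike whichever party would lose fewer expected life-years."""
    return age_a if life_years_lost(age_a) < life_years_lost(age_b) else age_b

def fleet_victim_share(n_dilemmas=100_000, seed=0):
    """Share of victims aged 65+ when every car runs the same rule."""
    rng = random.Random(seed)
    older_struck = 0
    for _ in range(n_dilemmas):
        a, b = rng.randint(1, 100), rng.randint(1, 100)  # two random pedestrians
        older_struck += consequentialist_rule(a, b) >= 65
    return older_struck / n_dilemmas

# With ages drawn uniformly from 1-100, 65+ pedestrians make up ~36% of
# the population, yet they end up as the victim in ~59% of dilemmas.
print(f"Share of victims aged 65+: {fleet_victim_share():.0%}")
```

The exact numbers do not matter. What matters is that a rule which might look defensible case by case becomes, once replicated across every car on the road, a population-level redistribution of risk onto one group.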

There are some subtleties to this argument that are worth exploring. You could reject it by arguing that there will still presumably be some diversity in how car manufacturers optimise their algorithms. So, for example, perhaps all BMWs will be consequentialist in their approach whereas all Audis will be deontological. This is likely to result in a degree of diversity but perhaps much less diversity than we currently have. This is what I think is most interesting about Liu’s argument. In a sense, we are all running crash-optimisation algorithms in our heads right now. We use these algorithms to resolve the moral dilemmas we face while driving. But as various experiments have revealed, the algorithms humans use are plural and messy. Most people have intuitions that make them lean in favour of consequentialist solutions in some cases and deontological solutions in others. Thus the moral choices made at an individual level can shift and change across different contexts and moods. This presumably creates great diversity at a societal level. The differences across car manufacturers are likely to be more limited.

This is, admittedly, speculative. We don’t know whether the diversity we have right now is so great that it avoids any pronounced structural discrimination in the resolution of moral dilemmas. But this is what is interesting about Liu’s argument: it makes an assumption about the current state of affairs (namely that there is great diversity in the resolution of moral dilemmas) that might be true but is difficult to verify until we enter a new state of affairs (one in which self-driving cars dominate the roads) and see whether there is a greater discriminatory impact or not. Right now, we are at a moment of uncertainty.

Of course, there might be technical solutions to the structural discrimination problem. Perhaps, for instance, crash optimisation algorithms could be designed with some element of randomisation, i.e. they randomly flip back-and-forth between different moral rules. This might prevent structural discrimination from arising. It might seem odd to advocate moral randomisation as a solution to the problem of structural discrimination, but perhaps a degree of randomisation is one of the benefits of the world in which we currently live.
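Hedged in the same way as the sketch above, here is what the randomisation suggestion might look like in outline. The two ‘moral rules’ are toy stand-ins of my own devising, not real decision logic:

```python
import random

# A minimal sketch of the randomisation idea; both 'moral rules' are
# deliberately simplistic placeholders.

def minimise_life_years_lost(ages):
    """Toy consequentialist rule: strike the older party."""
    return max(ages)

def equal_chances(ages):
    """Toy 'featureless human units' rule: a straight coin flip."""
    return random.choice(ages)

MORAL_RULES = [minimise_life_years_lost, equal_chances]

def randomised_optimiser(ages):
    """Pick a moral rule at random for each dilemma, so that no single
    rule's bias is replicated across the whole fleet."""
    rule = random.choice(MORAL_RULES)
    return rule(ages)

victim_age = randomised_optimiser([8, 80])
```

Per-dilemma randomisation of this sort crudely mimics the plurality of human intuitions described above: whatever bias each rule carries, none of them is multiplied across every car on the road.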


2. The Immunity Device Thought Experiment
There is another nice feature to Liu’s paper. After setting out the structural discrimination problem, he introduces a fascinating thought experiment. And unlike many philosophical thought experiments, this is one that might make the transition from thought to reality.

At the core of the crash optimisation dilemma is a simple question: how do we allocate risk in society? In this instance, the risk in question is the risk of dying in a car accident. We face many similar risk allocation decisions already. Complex systems of insurance and finance are set up with the explicit goal of spreading and reallocating these risks. We often allow people to purchase additional protection from risk through increased insurance premiums, and we sometimes allocate/gift people extra protections (e.g. certain politicians or leaders). Might we end up doing the same thing when it comes to the risk of being struck by a self-driving car? Liu asks us to imagine the following:

Immunity Device Thought Experiment: ‘It would not be implausible or unreasonable for the manufacturers of autonomous vehicles to issue what I would call here an “immunity device”: the bearer of such a device would become immune to collisions with autonomous vehicles. With the ubiquity of smart personal communication devices, it would not be difficult to develop a transmitting device to this end which signals the identity of its owner. Such an amulet would protect its owner in situations where an autonomous vehicle finds itself careening towards her, and would have the effect of deflecting the car away from that individual and thereby divert the car to engage in a new trolley problem style dilemma elsewhere.'
(Liu 2016, 169)

The thought experiment raises a few important and interesting questions. First, is such a device technically feasible? Second, should we allow for the creation of such a device? And third, if we did, how should we allocate the immunity it provides?

On the first question, I agree with what Liu says. It seems like we have the underlying technological infrastructure that could facilitate the creation of such a device. It would be much like any other smart device and would simply have to be in communication with the car. There may be technical challenges but they would not be insurmountable. There is a practical problem if everybody managed to get their hands on an immunity device: that would, after all, defeat the purpose. But Liu suggests a workaround for this: have a points-based (trump card) rating system attached to the device. So people don’t get perfect immunity; they get bumped up and down a ranking order. This changes the nature of the allocation question. It’s no longer who should get such a device but, rather, how the points should be allocated.
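Liu describes the points-based system only in outline, so the following sketch is speculative; the data structure and the rule that the car diverts towards the lowest-ranked party are my assumptions:

```python
from dataclasses import dataclass

# Speculative sketch of Liu's points-based ('trump card') ranking idea;
# the Bearer structure and the selection rule are my own assumptions.

@dataclass
class Bearer:
    name: str
    points: int  # higher points = stronger claim to be spared

def divert_towards(parties):
    """Immunity is relative, not absolute: in a dilemma the car is
    diverted towards whoever holds the weakest ranking."""
    return min(parties, key=lambda p: p.points)

struck = divert_towards([Bearer("A", points=70), Bearer("B", points=30)])
print(struck.name)  # -> B
```

On this design, nobody is ever fully immune; you are only as safe as your points relative to the other parties in the dilemma. That is precisely why the allocation-of-points question, rather than the allocation-of-devices question, becomes the important one.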

On the second question, I have mixed views. I feel very uncomfortable with the idea, but I can’t quite pin down my concern. I can see some arguments in its favour. We do, after all, have broadly analogous systems nowadays whereby people get additional protection through systems of social insurance. Nevertheless, there are some important disanalogies between what Liu imagines and other forms of insurance. In the case of, say, health insurance, we generally allow richer people to buy additional protection in the form of higher premiums. This can have negative redistributive consequences, but the gain to the rich person does not necessarily come at the expense of the poorer person. Indeed, in a very real sense, the rich person’s higher premium might be subsidising the healthcare of the poorer person. Furthermore, the protection that the rich person buys may never be used: it’s there as peace of mind. In the case of the immunity device, it seems like the rich person buying the device (or the points) would necessarily be doing so at the expense of someone else. After all, the device provides protection in the event of a self-driving car finding itself in a dilemma. The dilemma is such that the car has to strike someone. If you are buying immunity in such a scenario it means you are necessarily paying for the car to be diverted so that it strikes someone else. This might provide the basis for an objection to the idea itself: this is something that we possibly should not allow to exist. The problem with this objection is that it effectively applies the doctrine of double effect to this scenario, which is not something I am comfortable with. Also, even if we did ban such devices, we would still have to decide how to allocate the risk burden: at some stage someone would have to choose who bears it (unless you adopt the randomisation solution).

This brings us to the last question. If we did allow such a device to be created, how would we allocate the protection it provides? The market-based solution seems undesirable, for the reasons just stated. Liu considers the possibility of allocating points as a system of social reward and punishment. So, for example, if you commit a crime you could be punished by shouldering an increased risk burden (by being pushed down the ranking system). That seems prima facie more acceptable than allocating the immunity through the market. This is for two reasons. First, we are generally comfortable with the idea of punishment (though there are those who criticise it). Second, according to most definitions, punishment involves the intentional harming of another. So the kinds of concerns I raised in the previous paragraph would not apply to allocation-via-punishment: if punishment is justified at all then it seems like it would justify the intentional imposition of a risk burden on another. That said, there are reasons to think that directly harming someone through imprisonment or a fine is more morally acceptable than increasing the likelihood of their being injured/killed in a car accident. After all, if you object to corporal or capital punishment you may have reason to object to increasing the likelihood of bodily injury or death.


Okay, that brings us to the end of this post. I want to conclude by recommending Liu's paper. We discuss the ideas in it in more detail in the podcast we recorded. It should be available in a couple of weeks. Also, I should emphasise that Liu introduces the Immunity Device as a thought experiment. He is definitely not advocating its creation. He just thinks it helps us to think through some of the tricky ethical questions raised by the introduction of self-driving cars.

Sunday, April 23, 2017

Episode #21 - Mark Coeckelbergh on Robots and the Tragedy of Automation



In this episode I talk to Mark Coeckelbergh. Mark is a Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and President of the Society for Philosophy and Technology. He also has an affiliation as Professor of Technology and Social Responsibility at the Centre for Computing and Social Responsibility, De Montfort University, UK. We talk about robots and philosophy (robophilosophy), focusing on two topics in particular. First, the rise of the carebots and the mechanisation of society, and second, Hegel's master-slave dialectic and its application to our relationship with technology.


You can download the episode here. You can also listen below or subscribe on Stitcher and iTunes (via RSS) or here.


Show Notes

  • 0:00 - Introduction
  • 2:00 - What is a robot?
  • 3:30 - What is robophilosophy? Why is it important?
  • 4:45 - The phenomenological approach to roboethics
  • 6:48 - What are carebots? Why do people advocate their use?
  • 8:40 - Ethical objections to the use of carebots
  • 11:20 - Could a robot ever care for us?
  • 13:25 - Carebots and the Problem of Emotional Deception
  • 18:16 - Robots, modernity and the mechanisation of society
  • 21:50 - The Master-Slave Dialectic in Human-Robot Relationships
  • 25:17 - Robots and our increasing alienation from reality
  • 30:40 - Technology and the automation of human beings
 


Tuesday, April 18, 2017

Heersmink's Taxonomy of Cognitive Artifacts


Polynesian Sailing Map


Polynesian sailors developed elaborate techniques for long-distance sea travel long before their European counterparts. They mapped out the elevation of the stars; they followed the paths of migrating birds; they observed sea swells and tidal patterns. The techniques were often passed down from generation to generation through the medium of song. They are still taught to this day (in some locations). In 1976, there was a famous proof of their effectiveness when Mau Piailug, a practitioner of the techniques, steered a traditional sailing canoe nearly 3,000 miles from Hawaii to Tahiti without relying on more modern methods of navigation.

These Polynesian sailing techniques provide a perfect real-world illustration of distributed cognition theory. According to this theory, cognition is not something that takes place purely in the head. When humans want to perform cognitive tasks, they don’t simply represent and manipulate the cognition-relevant information in their brains, they also co-opt features of their environment to assist them in the performance of cognitive tasks. In the case of the Polynesian sailors, it was the migrational patterns of birds, the movements of the sea and the elevation of the stars that assisted the performance. It was also the created objects and cultural products (e.g. songs) that they used to help to offload the cognitive burden and transmit the relevant knowledge down through the generations. In this manner, the performance of the cognitive task of navigation became distributed between the individual sailor and the wider environment.

Generally speaking, there are three features of the external environment that can assist in the performance of a cognitive task:

Cognitive Artifacts: Intentionally designed objects that are used in the performance of the task, e.g. a map, a calendar, an abacus, or a textbook.

Naturefacts: Natural objects, events or states of affairs that get co-opted into the performance of a cognitive task, e.g. the paths of migrating birds and the elevation of the stars.

Other Cognitive Agents: Other humans (or, possibly, robots and AI) that can perform cognitive tasks in collaboration/cooperation with one another.

I think it is important to understand how all three of these cognitive-assisters function and to appreciate some of the qualitative differences between them. One thing that distributed cognition theory enables you to do is to appreciate the complex ecology of cognition. Because cognition is spread out across the agent and its environment, the agent becomes structurally coupled to that environment. If you tamper with or alter one part of the external cognitive ecology it can have knock-on effects elsewhere within the system, changing the kinds of cognitive task that need to be performed, and altering the costs/benefits associated with different styles of cognition (I discussed this, to some extent, in a previous post). Understanding how the different cognitive assisters function provides insight into these effects.

In the remainder of this post, I want to take a first step towards understanding the complexity of our cognitive ecology by taking a look at Richard Heersmink’s proposed taxonomy of cognitive artifacts. This taxonomy gives us some insight into one of the three relevant features of our cognitive ecology (cognitive artifacts) and enables us to appreciate how this feature works and the different possible forms it can take.

The taxonomy itself is fairly simple to represent in graphical form. It divides all cognitive artifacts into two major families: (i) representational and (ii) ecological. It then breaks these major families down into a number of sub-types. These sub-types are labelled using a somewhat esoteric conceptual vocabulary. The labels make sense once you have mastered the vocabulary. The remainder of this post is dedicated to explaining how it all works.





1. Representational Cognitive Artifacts
Cognition is an informational activity. We perform cognitive tasks by acquiring, manipulating, organising and communicating information. Consequently, cognitive artifacts are able to assist in the performance of cognitive tasks precisely because they have certain informational properties. As Heersmink puts it, the functional properties of these artifacts supervene on their informational properties. One of the most obvious things a cognitive artifact can do is represent information in different forms.

‘Representation’ is a somewhat subtle concept. Heersmink adopts C.S. Peirce’s classic analysis. This holds that representation is a triadic relation between an object, sign and interpreter. The object is the world that the sign is taken to represent, the sign is that which represents the world, and the interpreter is the one who determines the relation between the sign and the object. To use a simple example, suppose there is a portrait of you hanging on the wall. The portrait is the sign; it represents the object (in this case you); and you are the interpreter. The key thing about the sign is that it stands in for something else, namely the represented object. Signs can represent objects in different ways. Some forms of representation are straightforward: the sign simply looks like the object. Other forms of representation are more abstract.

Heersmink argues that there are three main forms of representation and, as a result, three main types of representational cognitive artifact. The first form of representation is iconic. An iconic representation is one that is isomorphic with or highly similar to the object it is representing. The classic example of an iconic cognitive artifact is a map. The map provides a scaled-down picture of the world. The visual imagery on the map is supposed to stand in a direct, one-to-one relation with the features in the real world. A lake is depicted as a blue blob; a forest is depicted as a mass of small green trees; a mountain range is depicted as a series of humps, coloured in different ways to represent their different heights.

The second form of representation is indexical. An indexical representation is one that is causally related to the object it is representing. The classic example of an indexical cognitive artifact would be a thermometer. The liquid within the thermometer expands when it is heated and contracts when it is cooled. This results in a change in the temperature reading on the temperature gauge. This means there is a direct causal relationship between the information represented on the temperature gauge and the actual temperature in the real world.

The third form of representation is symbolic. A symbolic representation is one that is neither iconic nor indexical. There is no discernible relationship between the sign and the object. The form that the sign takes is arbitrary and people simply agree (by social convention) that it represents a particular object or set of objects. Written language is the classic example of a symbolic cognitive artifact. The shapes of letters and the order in which they are presented bear no direct causal or isomorphic relationship to the objects they describe or name (pictographic or ideographic languages are different). The word ‘cat’, for example, bears no physical similarity to an actual cat. There is nothing about those letters that would tell you that they represented a cat. You simply have to learn the conventions to understand the representations.

The different forms of representation may be combined in any one cognitive artifact. For example, although maps are primarily iconic in nature, they often include symbolic elements such as place-names or numbers representing elevation or distance.


2. Ecological Cognitive Artifacts

The other family of cognitive artifacts are ecological in nature. This is a more difficult concept to explain. The gist of the idea is that some artifacts don’t merely provide representations of cognition-relevant information; rather, they provide actual forums in which information can be stored and manipulated. The favourite example of this — one originally posed by the distributed cognition pioneer David Kirsh — is the game of Tetris. For those who are not familiar, Tetris is a game in which you must manipulate differently shaped ‘bricks’ (technically known as ‘zoids’) into sockets or slots at the bottom of the game screen so that they form a continuous line of zoids. Although you could, in theory, play the game by mentally rotating the zoids in your head, and then deciding how to move them on the game screen, this is not the most effective way to play the game. The most effective way to play the game is simply to rotate the shapes on the screen and see how they will best fit into the wall forming at the bottom of the screen. In this way, the game creates an environment in which the cognition-relevant manipulation of information is performed directly. The artifact is thus its own cognitive ecology.

Heersmink argues that there are two main types of ecological cognitive artifact. The first is the spatial ecological artifact. This is any artifact that stores information in its spatial structure. The idea behind it is that we encode cognition-relevant information into our social spaces, thereby obviating the need to store that information in our heads. A simple example would be the way in which we organise clothes into piles in order to keep track of which clothes have been washed, which need to be washed, which have been dried, and which need to be ironed. The piles, and their distribution across physical space, store the cognition-relevant information. Heersmink points out that the spaces in which we encode information need not be physical/real-world spaces. They can also be virtual, e.g. the virtual ‘desktop’ on your computer or phone screen.

The other kind of ecological cognitive artifact is the structural artifact. I don’t know if this is the best name for it, but the idea is that some artifacts don’t simply encode information into physical or virtual space; they also provide forums in which that information can be manipulated, reorganised and computed. The Tetris gamescreen is an example: it provides a virtual space in which zoids can be rearranged and rotated. Another example would be Scrabble tiles: constantly reorganising the tiles into different pairs or triplets makes it easier to spot words. The humble pen and paper can also, arguably, be used to create structures in which information can be manipulated and reorganised (e.g. writing out the available letters and spaces when trying to solve a crossword clue).
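For readers who find a schematic easier than prose, here is the taxonomy rendered as a small Python data structure. This is my own schematisation of Heersmink’s categories, not code from his paper, and the example classifications are mine:

```python
from enum import Enum, auto

# My own schematisation of Heersmink's taxonomy, not code from his paper.

class ArtifactKind(Enum):
    # Representational family
    ICONIC = auto()      # resembles its object (a map)
    INDEXICAL = auto()   # causally tracks its object (a thermometer)
    SYMBOLIC = auto()    # conventional signs (written language)
    # Ecological family
    SPATIAL = auto()     # stores information in spatial layout (laundry piles)
    STRUCTURAL = auto()  # a forum for manipulating information (the Tetris screen)

# The categories are not mutually exclusive: one artifact can combine several.
EXAMPLES = {
    "map with place-names": {ArtifactKind.ICONIC, ArtifactKind.SYMBOLIC},
    "thermometer": {ArtifactKind.INDEXICAL},
    "crossword worked out on paper": {ArtifactKind.SYMBOLIC, ArtifactKind.STRUCTURAL},
}
```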


3. Conclusion
This, then, is Heersmink’s taxonomy of cognitive artifacts. One thing that is noticeable about it (and this is a feature, not a bug) is that it focuses on the properties of the artifacts themselves, not on human uses. It is, thus, an artifact-centred taxonomy, not an anthropocentric one. Also, the taxonomy does not divide the world of cognitive artifacts into a set of jointly exhaustive and mutually exclusive categories. As is clear from the descriptions, particular artifacts can sit within several of the categories at one time.

Nevertheless, I think the taxonomy is a useful one. It sheds light on the different ways in which artifacts can figure in our cognitive tasks, it makes us more sensitive to the rich panoply of cognitive artifacts we encounter in our everyday lives, and it can shed light on the propensity of these artifacts to enhance our cognitive performance. For example, symbolic cognitive artifacts clearly have a higher cognitive burden associated with them. The user must learn the conventions that determine the meaning of the representations before they can effectively use the artifact. At the same time, the symbolic representations probably allow for more complex and abstract cognitive operations to be performed. If we relied purely on iconic forms of representation we would probably never have generated the rich set of concepts and theories that litter our cognitive landscapes.

Saturday, April 15, 2017

The Art of Lecturing: Four Tips




The lecture is much maligned. An ancient art form, practiced for centuries by university lecturers, writers and public figures, it is now widely regarded as an inferior mode of education. Lectures are one-sided information dumps. They are more about the ego of the lecturer than the experience of the audience. They are often dull, boring, lacking in dynamism. They need to be replaced by ‘flipped’ classrooms, small-group activities, and student-led peer instruction.

And yet lectures persist. In an era of mass higher education, there is little other choice. An academic with teaching duties simply must learn to lecture to large groups of (apathetic) students. The much-celebrated paradigm of the Oxbridge-style tutorial, whatever its virtues may be, is simply too costly to realise on a mass scale. So how can we do a better job lecturing? How can we turn the lecture into a useful educational tool?

I claim no special insight. I have been lecturing for years and I’m not sure I am any good at it. There are times when I think it goes well. I feel as if I got across the point I wanted to get across. I feel as if the students understood and engaged with what I was trying to say. Many times the evaluations I receive from them are encouraging. But these evaluations are global not local in nature: they assess the course as a whole, not particular lectures. Furthermore, I’m not sure that one-time, snapshot evaluations of this nature are all that useful. Not only is there a significant non-response rate, there is also the fact that the value of the particular lecture may take time to materialise. When I think back to my own college days, I remember few, if any, of the lectures I attended. It’s the odd one or two that have stuck in mind and proven useful. It would have been impossible for me to know this at the time.

So the sad reality is that most of the time we lecture in the dark. We try our best (or not) and never know for sure whether we are doing an effective job. The only measures we have are transient and immediate: how did I (qua lecturer) act in the moment? Was I fluent in my exposition? Did the class engage with what I was saying? Did they ask questions? Was their curiosity piqued? Did any of the students come up to me afterwards to ask more questions about the topic? Did I create a positive atmosphere in the class?

Despite this somewhat pessimistic perspective, I think there are things that a lecturer can do to improve the lecturing experience, both for themselves and for their students. To this end, I created a poster with four main tips on how to lecture more effectively. I created this some time ago, after reading James Lang’s useful book On Course: A Week-by-Week Guide to Your First Semester of College Teaching, and by reflecting on my own classroom experiences. You can view the poster below; I elaborate on its contents in what follows.




1. Cultivate the Right Attitude
The first thing to do in order to improve the lecturing experience is simply to improve one’s own attitude towards it. If you read books on pedagogy or attend classes on teaching in higher education, you’ll come across a lot of anti-lecture writings. And if you do enough lectures yourself, you can end up feeling pretty jaded and cynical. The main critique of the lecture as a pedagogical tool is that it is antiquated. It may have had value at a time when students didn’t have easy access to the information being presented by the lecturer, but in today’s information rich society it makes no sense. Students can acquire all the information that is presented to them in the lecture through their own efforts — all the more so if you are providing them with class notes and lecture slides. So why bother?

The answer is that the lecture is still valuable and it’s important to appreciate its value before you start lecturing. For starters, I would argue that in today’s information-rich society, the lecture possibly has more value than ever before. The lecture is not just an information-dump; it is a lived experience. Just because students have easy access to the information contained within your lecture doesn’t mean they will actually access it. Most probably won’t, not unless they are cramming for their final exams. Not only is today’s society information-rich; it is also distraction-rich. When students leave the classroom they will have to exert exceptional willpower in order to avoid those distractions and engage with the relevant information. Thus, there is some value to the lecture as a ‘special’ lived experience when students are forced to confront the information and ideas relevant to their educational programme. They can, of course, supplement this with their own reading and learning, but students who don’t avail of the ‘special time’ of the lecture face an additional hurdle.

On top of this, there are things that a lecture can do that cannot be easily replicated by textbooks and lecture notes and the like. First, lectures can effectively summarise the most up-to-date research and synthesise complex bodies of information. This is particularly true if you are lecturing on your research interests and you keep abreast of the latest research in a way that textbooks and other materials do not. Lectures can also translate complex ideas to particular audiences. If you are lecturing to a group (in person) you can get a good sense of whether they ‘grok’ the material being presented by constantly checking in. This allows you to adjust the pace of presentation or the style of explanation to a manner that best suits the group. Another value of lectures is that they allow the lecturer to present themselves as an intellectual model to their students — to inspire them to engage with the world of ideas.

Finally, if all else fails, lectures have value for the lecturer because they learn more about their field of study through the process of preparing for lectures. It is an oft-repeated truism that you don’t really know something until you have to explain it to someone else. Lectures give you the opportunity to do that several times a week.


2. Organise the Material
The second thing to do is to organise the material effectively. It’s an obvious point, but if the lecture consists largely in you presenting information to students, it is important that the information is presented in some comprehensible and compelling format. There are many ways to do this effectively, but three general principles are worth keeping in mind:


  • (i) Less is more: Lecturers have a tendency to overstuff their lectures with material, often because they have done a lot of reading on the topic and don’t want it to go to waste. What seems manageable to the lecturer is often too much for the students. I tend to think 3-5 main ideas per fifty-minute lecture is a good target.

  • (ii) Coherency: The lecture should have some coherent structure. It should not be just one idea after another. Organising the lecture around one key argument, story, or research study is often an effective way to achieve coherency. I lecture in law or legal theory so I tend to organise lectures around legal rules and the exceptions to them, or policy arguments and the critiques of them. I’m not sure this is always effective. I think it might be better to organise lectures around stories. Fortunately, law is an abundant source of stories: every case that comes before the court is a story about someone’s life and how it was affected by a legal rule. I’m starting to experiment with structuring my lectures around the more compelling of these stories.

  • (iii) Variation: It’s always worth remembering that attention spans are short, so you should build some variation into the lecture. Occasional pauses for question breaks or group activities are a good way to break up the monotony.



3. Manage the Performance
The third thing to do is to manage the physical performance of lecturing. This might be the most difficult part of lecturing when you are starting out. When I first started, I never thought of lecturing as a performance art. But over time I have come to learn that it is. Being an effective lecturer is just as much about mastering the physical space of the lecture theatre as it is about knowing the material. I tended to focus on the latter when I was a beginner; now I tend to focus more on the former.

The general things to keep in mind here are (i) your lecturing persona and (ii) the way in which you land your energy within the classroom.

When you are lecturing you are, to at least some extent, playing a character. Who you are in the lecture theatre is different from who you are in the rest of your life. I know some lecturers craft an intimidating persona, eager to impress their students with their learning and dismissive of what they perceive to be silly questions. Such personas tend to stem from insecurity. At the same time, I know other lecturers who try to be incredibly friendly and open in their classroom personas, while oftentimes being more insular and closed in the rest of their worklife. I try to land somewhere in between these extremes with my lecturing persona. I don’t like being overly friendly, but I don’t like being intimidating either.

‘Landing your energy’ refers to the way in which you direct your attention and gaze within the classroom. I remember one lecturer I had who used to land his energy on a clock at the back of the lecture theatre. At the start of every lecture he would open up his powerpoint presentation, gaze at the clock on the back wall of the lecture theatre, tilt his head to one side, and then start talking. Never once did he look at the expressions on his students’ faces. Suffice to say, this was not a very effective way to manage the physical space within the classroom. It wasn’t engaging. It didn’t make students feel like they were important to the performance.

A good resource for managing the physical aspects of lecturing is this video from the Derek Bok Center on ‘The Act of Teaching’.


4. Engage the Students
The final thing to do is to make sure that lectures are not purely one-way. This is the biggest criticism of lectures and it can be avoided by building in opportunities for genuine student engagement during the 50 or so minutes you have in the typical lecture. There are some standard methods for doing this. The most obvious is to encourage students to take notes. This might seem incredibly old-fashioned, but I always emphasise it to students in my courses. The note-taking process forces students to cognitively engage with what is being said and to translate it into a language that makes sense to them. To some extent, it doesn’t even matter if the students use the notes for revision purposes.

Other things you can do include: building discussion moments into the class when you pause to ask questions, get students to think about them, and then ask follow-up questions; using in-class demonstrations of key ideas and concepts; and using the peer-instruction model (pioneered by Eric Mazur) where you pose conceptual tests during the lecture and get students to answer in peer groups. Of these, my favourites are the first two. I like to pause during lectures to get students to think about some question for a minute; get them to discuss it with the person sitting next to them for another minute; and then to develop this into a classroom discussion. I find this to be the most effective technique for stimulating classroom discussion — much more so than simply posing a question to the group as a whole. Demonstrations can also work well, but only for particular subjects or ideas. I use game theory in some of my classes and I find that demonstrating how certain legal, political and commercial ‘games’ work, using volunteers from the class, is an effective way to facilitate student engagement.

Monday, April 10, 2017

Abortion and the People Seeds Thought Experiment




(Entry on the violinist thought experiment)

The most widely discussed argument against abortion focuses on the right to life. It starts from something like the following premise:


  • (1) If an entity X has a right to life, it is impermissible to terminate X’s existence.


This premise seems plausible but needs to be modified. It does not deal with the clash of rights. There are certain cases in which rights conflict and need to be balanced and traded off against each other. The most obvious case is the one in which one person’s right to life conflicts with another person’s right to life. In those cases (typically referred to as ‘self defence’ cases) it may be permissible for one individual to terminate another individual’s existence. Abortion may occasionally be permitted on these grounds. For example, the foetus may pose a genuine threat to the life of the mother and so her right to life might be taken to trump the foetus’s right to life (assuming, for the sake of argument, that it has such a right).

The more difficult case is where the foetus poses no threat to the life of the mother. The question then becomes whether the mother’s right to control what happens to her body trumps the foetus’ right to life. Judith Jarvis Thomson’s famous article ‘A Defense of Abortion’ argues for an affirmative answer to this question. It does so through a series of fanciful and ingenious thought experiments. The most widely-discussed of those thought experiments is the violinist thought experiment, which supposedly shows that the right to control one’s body trumps the right to life in cases of pregnancy resulting from rape. I presented a lengthy analysis of that thought experiment in a recent post.

Less widely-discussed is Thomson’s ‘People Seeds’ thought experiment and it’s that thought experiment that I wish to discuss over the remainder of this post. I do so with some help from John Martin Fischer’s article ‘Abortion and Ownership’, as well as William Simkulet’s article ‘Abortion, Property and Liberty’.


1. People Seeds and Contraceptive Failure
Here is Thomson’s original presentation of the ‘People Seeds’ thought experiment.

[S]uppose it were like this: people-seeds drift about in the air like pollen, and if you open your windows, one may drift in and take root in your carpets or upholstery. You don’t want children, so you fix up your windows with fine mesh screens, the very best you can buy. As can happen, however, and on very, very rare occasions does happen, one of the screens is defective; and a seed drifts in and takes root. 
(Thomson 1971, 59)

Now ask yourself two questions about this thought experiment: (1) Do you have a right to remove the seed if it takes root? and (2) What is this scenario like?

In answer to the first question, Thomson suggests that the answer is ‘yes’. You have no duty to allow the people-seed to gestate on the floor of your house just because one happened to get through your mesh screens. Your voluntary opening of the windows does not give an insurmountable right to the people-seeds. In answer to the second question, it is supposed to be like the case of pregnancy resulting from contraceptive failure. Arguing by analogy, Thomson’s claim is that the moral principle governing the ‘People-Seed’ case carries over to the case of pregnancy resulting from contraceptive failure. So just as the right to control what happens to one’s property trumps the people-seed’s right to life in the former, so too does the right to control what happens to one’s body trump the foetus’ right to life (assuming it has one) in the latter. I have tried to illustrate this reasoning in the diagram below.



This argument is significant, if it is right. Thomson’s violinist thought experiment could only establish the permissibility of abortion in cases of involuntary pregnancy (i.e. pregnancy resulting from rape). The ‘People-seeds’ thought experiment goes further and purports to establish the permissibility of abortion in cases of voluntary sexual intercourse involving contraceptive failure. Is the argument right?


2. Counter-Analogies to People-Seeds
I’m going to look at John Martin Fischer’s analysis of the ‘People-Seeds’ thought experiment. I’ll start with an important preliminary point. Whenever we develop and evaluate a thought experiment, we have to be careful to ensure that our intuitions about what is happening in the thought experiment are not being contaminated or affected by irrelevant variables.

Thomson’s stated goal in her article is to consider the permissibility of abortion if we take for granted that the foetus has a right to life. Obviously, this is a controversial assumption. Many people argue that the foetus does not have a right to life because the foetus is not a person (or other entity capable of having a right to life). Thomson is trying to set that controversy to the side. She is willing to accept that the foetus really does have a right to life. Consequently, it is important for her project that she uses thought experiments involving entities that clearly do have a right to life. The violinist thought experiment clearly succeeds in this regard. It involves a fully competent adult human being — an entity that uncontroversially has a right to life. It’s less clear whether the people-seeds thought experiment shares this quality. It could be that when people are imagining the scenario they don’t think of the people-seeds as entities possessing a right to life (perhaps they think of them as the equivalent of sperm cells getting lodged in your carpet - they will take a bit of time to become people). Consequently, their conclusion that there is nothing wrong with removing the people-seeds from the carpet might not be driven by intuitions regarding the trade-off between the right to life and the right to control one’s property but rather by intuitions about the right to control one’s property simpliciter.

Fischer thinks there is some evidence for this interpretation of the thought experiment. If you run an alternative, but quite similar, thought experiment involving an entity that clearly does possess a right to life, the conclusion Thomson wishes to draw is much less compelling. Here’s one such thought experiment coming from the philosopher Kelly Sorensen:

Imagine you live in a high-rise apartment. The room is stuffy, and so you open a window to air it out. You don’t want anyone coming in…so you fix up your windows with metal bars, the very best you can buy. As can happen, though, the bars and/or their installation are defective, and the Spiderman actor [who is filming in the local area]…falls in, breaks his back in a special way, and cannot be moved, without ending his life, for nine months. Are you morally required to let him stay? 
(Fischer 2013, 291)

The suggestion from Fischer is that you might be under such an obligation. But if this is right, then it possibly provides a better analogy with the case of pregnancy resulting from contraceptive failure and a reason to think that the right to control one’s body does not trump the right to life.

Another point that Fischer makes is that your role in causing the entity in question to become dependent on you (your body or your property) might make a relevant difference to our moral beliefs. Thus, the fact that Thomson’s thought experiment asks us to suppose that the people-seeds are just out there already, floating around on the breeze, waiting to take up residency on somebody’s carpet, might be affecting our judgment. In this world, you are constantly in a defensive posture, trying to block the invasion of the people-seeds. If we changed the scenario so that you actually play some positive causal role in drawing them into your house/apartment we might reach a different conclusion. So here’s a slight variation on Thomson’s thought experiment:

Suppose that you can get some fresh air by simply opening the window (with the fine mesh screen), but still, you would get so much more if you were to use your fan, suitably placed and positioned so that it is sucking air from outside into the room. The only problem is that this sucks people-seeds into the room along with the fresh air. 
(Fischer 2013, 292)

The suggestion is that this is much closer to the case of pregnancy resulting from contraceptive failure. After all, voluntarily engaging in sexual intercourse (even with contraception) involves playing a positive causal role in drawing into your body the sperm cells that make pregnancy possible.

In sum, then, we have two counter-analogies to Thomson’s ‘People-Seeds’ thought experiment. The suggestion is that both of these thought experiments are closer to pregnancy resulting from contraceptive failure and so the moral principle that applies in both should carry over to that case. The right to control one’s body does not trump the right to life.




3. Analysis of the Counter-Analogies
There are two problems with these counter-analogies. The first is simply that they do not compare like with like. This is a problem with all thought experiments that are intended to provide analogies with pregnancy, including Thomson’s. Pregnancy is, arguably, a sui generis phenomenon: there are no good analogies with it, period. Consequently, it is very difficult to build a moral argument for (or against) abortion by simply constructing elaborate and highly artificial thought experiments that pump our intuitions about the right to life in various ways. Furthermore, even if you hold out some hope for the analogical strategy, there is something pretty obviously disanalogous about the two scenarios: all the thought experiments involve interferences with the right to property, not with the right to control one’s body. Perhaps one has a property right over one’s body. Even still, the degree of invasiveness and dependency involved in pregnancy is quite unlike someone taking up residency on your carpet.

Another problem with the thought experiments is the normative principles underlying them. The whole discussion about pregnancy and contraceptive failure is motivated by the belief that consent matters when it comes to determining the rights claims that others have over us. Pregnancy from rape is distinctive because it involves a lack of consent. One person impregnates another against their will. It seems intuitively plausible (irrespective of the ranking one has of different rights) to assume that duties cannot be easily imposed on someone without their consent. Pregnancy from contraceptive failure is different because (a) everyone knows that pregnancy is a possible (if not probable) result of sexual intercourse even when it takes place with contraceptive protection and (b) by consenting to the sexual intercourse it seems like you must be willing to run the risk of this possible result. Consequently, it doesn’t seem quite so far-fetched to suppose that you might be voluntarily incurring some duties by engaging in the activity.

This line of reasoning, as William Simkulet sees it, is motivated by the following consent principle:

Consent principle: When an agent A freely engages in action X, A consents to all possible foreseeable consequences of X.

At first glance, this seems like a plausible principle and if it is correct it would seem to imply that A incurs certain obligations or duties with respect to X. But according to Simkulet (and Thomson) this consent principle cannot possibly be correct because it entails absurd consequences. It entails that women are ‘on the hook’ (so to speak) for all the possible pregnancies that might befall them (irrespective of whether they consented to the sexual activity that led to the pregnancy) because rape is a possible foreseeable consequence of being alive and walking about in the world, and hence women who refuse to get hysterectomies must have consented to the possibility of pregnancy resulting from rape. Thomson put it like this in her original article:

…by the same token anyone can avoid a pregnancy due to rape by having a hysterectomy, or anyway by never leaving home without a (reliable!) army. 
(Thomson 1971, 59)

And Simkulet explained the idea in his article as follows:

The circumstances that we face are, largely, outside of our control. But whether we have invasive surgery to remove our reproductive organs is, largely, within our control. It is uncontroversially true that any of us might be raped at some point in the future. Therefore, according to this argument, women who realize that rape is possible but who do not have a hysterectomy have consented to becoming pregnant from sexual assault. 
(Simkulet 2015, 376)

Simkulet also suggests, along similar lines, that the consent principle, if true, would entail that we all consent to all the possible foreseeable misfortunes that befall us because we could have avoided them by committing suicide. It is, of course, absurd to assume that if we wish to avoid responsibility for what happens to us we must get hysterectomies or commit suicide; hence the consent principle must be wrong.

I’m not sure what to make of this. I agree with Simkulet and Thomson that the strong version of the consent principle — the one that holds that we are on the hook for all possible foreseeable consequences of what we do — must be wrong. But obviously some version of the consent principle must be correct (perhaps one that focuses on results that are reasonably foreseeable or probable). After all, it is essential to our systems of contract law and legal responsibility that we incur duties through our voluntary activity.

If this is correct, then maybe Thomson’s thought experiments succeed in showing that the right to control one’s body trumps the right to life of the foetus (assuming it has one) in cases of pregnancy resulting from contraceptive failure, but they do nothing to show whether the same result holds in cases of unprotected consensual sexual intercourse. Those cases might be covered by a suitably modified version of the consent principle. If we want to argue for a pro-choice stance in relation to those cases, we may need to focus once more on the question of who or what bears a right to life.

Sunday, April 2, 2017

New Paper - Could there ever be an app for that? Consent Apps and the Problem of Sexual Assault




I have a new paper coming out in Criminal Law and Philosophy. The final version won't be out for a few weeks, but you can access a pre-publication version at the links below.

Title: Could there ever be an app for that? Consent Apps and the Problem of Sexual Assault
Journal: Criminal Law and Philosophy
Links: Official; Academia.edu; Philpapers
Abstract:  Rape and sexual assault are major problems. In the majority of rape and sexual assault cases consent is the central issue. Consent is, to borrow a phrase, the ‘moral magic’ that converts an impermissible act into a permissible one. In recent years, a handful of companies have tried to launch ‘consent apps’ which aim to educate young people about the nature of sexual consent and allow them to record signals of consent for future verification. Although ostensibly aimed at addressing the problems of rape and sexual assault on university campuses, these apps have attracted a number of critics. In this paper, I subject the phenomenon of consent apps to philosophical scrutiny. I argue that the consent apps that have been launched to date are unhelpful because they fail to address the landscape of ethical and epistemic problems that would arise in the typical rape or sexual assault case: they produce distorted and decontextualised records of consent which may in turn exacerbate the other problems associated with rape and sexual assault. Furthermore, because of the tradeoffs involved, it is unlikely that app-based technologies could ever be created that would significantly address the problems of rape and sexual assault. 
 
 

Friday, March 31, 2017

Robot Rights: Intelligent Machines (Panel Discussion)





I participated in a debate/panel discussion about robot rights at the Science Gallery (Trinity, Dublin) on the 29th March 2017. A video from the event is above. Here's the description from the organisers:

What if robots were truly intelligent and fully self aware? Would we give them equal rights and the same protection under the law as we provide ourselves? Should we? But if a machine can think, decide and act on its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?

Moderated by Lilian Alweiss from the philosophy department at Trinity College Dublin, panellists include Conor Mc Ginn, Mechanical & Engineering Department, Trinity College Dublin; John Danaher, Law department NUI Galway; and Eoghan O'Mahoney from McCann Fitzgerald.

Join us as we explore these issues as part of our HUMANS NEED NOT APPLY exhibition with a panel discussion featuring leaders in the fields of AI, ethics and law.

Tuesday, March 28, 2017

BONUS EPISODE - Pip Thornton on linguistic capitalism, Google's ad empire, fake news and poetry



[Note: This was previously posted on my Algocracy project blog; I'm cross-posting it here now. The audio quality isn't perfect but the content is very interesting. It is a talk by Pip Thornton, the (former) Research Assistant on the project].

My post as research assistant on the Algocracy & Transhumanism project at NUIG has come to an end. I have really enjoyed the five months I have spent here in Galway - I have learned a great deal from the workshops I have been involved in, the podcasts I have edited, the background research I have been doing for John on the project, and also from the many amazing people I have met both in and outside the university.

I have also had the opportunity to present my own research to a wide audience, and most recently gave a talk on behalf of the Technology and Governance research cluster entitled 'A Critique of Linguistic Capitalism (and an artistic intervention)' as part of a seminar series organised by the Whitaker Institute's Ideas Forum, which I managed to record.

Part of my research involves using poetry to critique linguistic capitalism and the way language is both written and read in an age of algorithmic reproduction. For the talk I invited Galway poet Rita Ann Higgins to help me explore the differing 'value' of words, so the talk includes Rita Ann reciting an extract from her award-winning poem Our Killer City, and my own imagining of what the poem 'sounds like' - or is worth - to Google. The argument central to my thesis is that the power held by the tech giant Google, as it mediates, manipulates and extracts economic value from the language (or, more accurately, the decontextualised linguistic data) which flows through its search, communication and advertising systems, needs both transparency and strong critique. Words are auctioned off to the highest bidder, and become little more than tools in the creation of advertising revenue. But there are significant side effects, which can be both linguistic and political. Fake news sites are big business for advertisers and Google, but also infect the wider discourse as they spread through social media networks and national consciousness. One of the big questions I am now starting to ask is just how resilient language is to this neoliberal infusion, and what it could mean politically. As the value of language shifts from conveyor of meaning to conveyor of capital, how long will it be before the linguistic bubble bursts?

You can download it HERE or listen below:



Track Notes



  • 0:00 - introduction and background
  • 4:30 - Google Search & autocomplete - digital language and semantic escorts
  • 6:20 - Linguistic Capitalism and Google AdWords - the wisdom of a linguistic marketplace?
  • 9:30 - Google Ad Grants - politicising free ads: the Redirect Method, A Clockwork Orange and the neoliberal logic of countering extremism via Google search 
  • 16:00 - Google AdSense - fake news sites, click-bait and ad revenue  -  from Chicago ballot boxes to Macedonia - the ads are real but the news is fake 
  • 20:35 - Interventions #1 - combating AdSense (and Breitbart News) - the Sleeping Giants Twitter campaign 
  • 23:00 - Interventions #2 - Gmail and the American Psycho experiment 
  • 25:30 - Interventions #3 - my own {poem}.py project - critiquing AdWords using poetry, cryptography and a second hand receipt printer 
  • 30:00 - special guest poet Rita Ann Higgins reciting Our Killer City 
  • 33:30 - Conclusions - a manifestation of postmodernism? sub-prime language - when does the bubble burst? commodified words as the master's tools - problems  of method




Monday, March 20, 2017

Abortion and the Violinist Thought Experiment




Here is a simple argument against abortion:


  • (1) If an entity (X) has a right to life, it is, ceteris paribus, not permissible to terminate that entity’s existence.
  • (2) The foetus has a right to life.
  • (3) Therefore, it is not permissible to kill or terminate the foetus’s existence.


Defenders of abortion will criticise at least one of the premises of this argument. Many will challenge premise (2). They will argue that the foetus is not a person and hence does not have a right to life. Anti-abortion advocates will respond by saying that it is a person, or that it has some other status that gives it a right to life. This gets us into some abstruse questions on the metaphysics of personhood and moral status.

The other pro-choice strategy is to challenge premise (1) and argue that there are exceptions to the principle in question. Indeed, exceptions seem to abound. There are situations in which one right to life must be balanced against another and in those situations it is permissible for one individual to kill another. This is the typical case of self-defence: someone immediately and credibly threatens to end your life and the only way to neutralise that threat is to end theirs. Killing them is permissible in these circumstances. A pro-choice advocate might argue that there are some circumstances in which pregnancy is analogous to the typical case of self-defence, i.e. there are cases where the foetus poses an immediate and credible threat to the life of the mother and the only way to neutralise that threat is to end the life of the foetus.

The trickier scenario is where the mother’s life is unthreatened. In those cases, if the foetus has a right to life, anti-abortionists will argue that the following duty holds:

Gestational duty: If a woman’s life is unthreatened by her being pregnant, she has a duty to carry the foetus to term.

The rationale for this is that the woman’s right to control her body cannot trump the foetus’ right to life. In the moral pecking order, the right to life ranks higher than the right to do with one’s body as one pleases.

It is precisely this understanding of the gestational duty that Judith Jarvis Thomson challenged in her famous 1971 article ‘A Defense of Abortion’. She did so by way of some ingenious thought experiments featuring sick violinists, expanding babies and floating ‘people-seeds’. Much has been written about those thought experiments in the intervening years. I want to take a look at some recent criticism and commentary from John Martin Fischer. He tries to show that Thomson’s thought experiments don’t provide as much guidance for the typical case of pregnancy as we initially assume, but this, in turn, does not provide succour for the opponents of abortion.

I’ll divide my discussion up over two posts. In this post, I’ll look at Fischer’s analysis of the Violinist thought experiment. In the next one, I’ll look at his analysis of the ‘people seeds’ thought experiment.


1. The Violinist Thought Experiment
The most famous thought experiment from Thomson’s article is the one about the violinist. Even if you know nothing about the broader abortion debate, you have probably come across this thought experiment. Here it is in all its original glory:

The Violinist: ‘You wake up in the morning and find yourself back to back in bed with an unconscious violinist. A famous unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist’s circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you, “Look, we’re sorry the Society of Music Lovers did this to you — we would never have permitted it if we had known. But still, they did it, and the violinist is now plugged into you. To unplug you would be to kill him. But never mind, it’s only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.”’ (Thomson 1971, 132)

Do you have a duty to remain plugged into the violinist? Thomson argues that you don’t; that intuitively, in this case, it is permissible to unplug yourself from the violinist. That doesn’t mean we would praise you for doing it — we might think it is morally better for you to stay plugged in — but it does mean that we don’t think you are blameworthy for unplugging. In this case, your right to control your own body trumps the violinist’s right to life.

Where does that get us? The argument is that the case of the violinist is very similar to the case of pregnancy resulting from rape. In both cases you are involuntarily placed in a position in which somebody else’s life depends on being attached to your body for nine months. By analogy, if your right to control your own body trumps the violinist’s right to life, it will also trump the foetus’ right to life:


  • (4) In the violinist case, you have no duty to stay plugged into the violinist (i.e. your right to control your own body trumps his right to life).
  • (5) Pregnancy resulting from rape is similar to the violinist case in all important respects.
  • (6) Therefore, probably, you have no duty to carry the foetus to term in the case of pregnancy resulting from rape (i.e. your right to control your own body trumps the foetus’ right to life).


Since it will be useful for later purposes, I’ve tried to map the basic logic of this argument from analogy in the diagram below. The diagram is saying that the two cases are sufficiently similar so that it is reasonable to suppose that the moral principle that applies to the first case carries over to the second.



2. Fischer’s Criticism of the Violinist Thought Experiment
In his article, ‘Abortion and Ownership’, Fischer challenges Thomson’s intuitive reaction to The Violinist. His argumentative strategy is subtle and interesting. He builds up a chain of counter-analogies (i.e. analogies in which the opposite principle applies) and argues that they are sufficient to cast doubt on the conclusion that your right to control your own body trumps the violinist’s right to life.

He starts with a thought experiment from Joel Feinberg:

Cabin Case 1: “Suppose that you are on a backpacking trip in the high mountain country when an unanticipated blizzard strikes the area with such ferocity that your life is imperiled. Fortunately, you stumble onto an unoccupied cabin, locked and boarded up for the winter, clearly somebody else’s private property. You smash in a window, enter, and huddle in a corner for three days until the storm abates. During this period you help yourself to your unknown benefactor’s food supply and burn his wooden furniture in the fireplace to keep warm.” (Feinberg 1978, 102)

Feinberg thinks that in this case you have a right to break into the house and use the available resources. The problem is that this clearly violates the cabin-owner’s right to control their property. Still, the fact that you are justified in violating that right tells us something interesting. It tells us that, in this scenario, the right to life trumps the right to control one’s own property.

So what? The right of the cabin-owner to control his/her property is very different from your right to control your body (in the case of the violinist and pregnancy-from-rape). For one thing, the violation in the case of the cabin-owner is short-lived, lasting only three days, until the storm abates. Furthermore, it requires no immediate interference with the owner’s enjoyment of the property or with their body. We are explicitly told that the cabin is unoccupied at the time. So, at first glance, it doesn’t seem like Cabin Case 1 tells us anything interesting about abortion.

Fischer begs to differ. He tries to construct a series of thought experiments that bridge the gap between the Cabin Case 1 and The Violinist. He does so by first imagining a case in which the property-owner is present at the time of the interference and in which the interference will continue for at least nine months:

Cabin Case 2: "You have secured a cabin in an extremely remote and inaccessible place in the mountains. You wish to be alone; you have enough supplies for yourself, and also some extras in case of an emergency. Unfortunately, a very evil man has kidnapped an innocent person and [left] him to die in the desolate mountain country near your cabin. The innocent person wanders for hours and finally happens upon your cabin…You can radio for help, but because of the remoteness and inaccessibility of your cabin and the relatively primitive technology of the country in which it is located, the rescue party will require nine months to reach your cabin…You can let the innocent stranger into your cabin and provide food and shelter until the rescue party arrives in nine months, or you can forcibly prevent him from entering your cabin and thus cause his death (or perhaps allow him to die)." (Fischer 1991, 6)

Fischer argues that, intuitively, in this case the innocent person still has the right to use your property and emergency resources, and you have a duty of beneficence to them. In other words, their right to life trumps your right to control and use your property. Of course, a fan of Thomson’s original thought experiment might still resist this by arguing that the rights violation in this second Cabin Case is different because it does not involve any direct bodily interference. So Fischer comes up with a third variation that involves precisely that:

Cabin Case 3: The same scenario as Cabin Case 2, except that the innocent person is tiny and injured and would need to be carried around on your back for the nine months. You are physically capable of doing this.

Fischer argues that the intuition doesn’t change in this case. He thinks we still have a duty of beneficence to the innocent stranger, despite the fact that it involves a nine-month interference with our right to control our property and our bodies. The right to life still trumps both. This is important because Cabin Case 3 is, according to Fischer, very similar to the Violinist.

What Fischer is arguing, then, is sketched in the diagram below. He is arguing that the principle that applies in Cabin Case 1 carries over to Cabin Case 3 and that there is no relevant moral difference between Cabin Case 3 and the Violinist. Thomson’s original argument is, thereby, undermined.



For what it’s worth, I’m not entirely convinced by this line of reasoning. I don’t quite share Fischer’s intuition about Cabin Case 3. I think that if you really imagined the inconvenience and risk involved in carrying another person around on your back for nine months, you might not be so quick to affirm a duty of beneficence. That reveals one of the big problems with this debate: esoteric thought experiments can generate different intuitive reactions.


3. What does this mean for abortion?
Let’s suppose Fischer is correct in his reasoning. What follows? One thing that follows is that the right to life trumps the right to control one’s body in the case of the Violinist. But does it thereby follow that the right to life trumps the right to control one’s body in the case of pregnancy from rape? Not necessarily. Fischer argues that there could be important differences between the two scenarios, overlooked in Thomson’s original discussion, that warrant a different conclusion in the rape scenario. A few examples spring to mind.

In the case of pregnancy resulting from rape, both the woman and the rapist will have a genetic link with the resulting child and will be its natural parents. The woman is likely to have some natural affection and feelings of obligation toward the child, but this may be tempered by the fact that the child (innocent and all as it is) is a potential reminder (trigger) of the trauma of the rape that led to its existence. The woman may give the child up for adoption — and thereby absolve herself of legal duties toward it — but this may not dissolve any natural feelings of affection and obligation. Furthermore, the child may be curious about its biological parentage in later years and may seek a relationship with its natural mother or father (it may need to do so because it requires information about its genetic lineage). All of which is to say that the relationship between the mother and child is very different from the relationship between you and the violinist, or you and the tiny innocent person you have to carry on your back. Those relationships normatively and naturally dissolve after the nine-month period of dependency. This is not true in the case of the mother and her offspring. The interference with her rights lingers.

These differences may be sufficient to warrant a different conclusion in the case of pregnancy resulting from rape. But this is of little advantage to the pro-choice advocate, for it says nothing about other pregnancies. There are critics of abortion who are willing to concede that it should be an option in cases of rape. They argue that this doesn’t affect the gestational duty in the larger range of cases where pregnancy results from consensual sexual intercourse. That’s where Thomson’s other thought experiment (People Seeds) comes into play. I’ll look at that thought experiment, along with Fischer’s analysis of it, in the next post.

Tuesday, March 14, 2017

How to Plug the Robot Responsibility Gap




Killer robots. You have probably heard about them. You may also have heard that there is a campaign to stop them. One of the main arguments that proponents of the campaign make is that they will create responsibility gaps in military operations. The problem is twofold: (i) the robots themselves will not be proper subjects of responsibility ascriptions; and (ii) as they gain autonomy, there is more separation between what they do and the acts of the commanding officers or developers who allowed their use, and so less ground for holding these people responsible for what the robots do. A responsibility gap opens up.

The classic statement of this ‘responsibility gap’ argument comes from Robert Sparrow (2007, 74-75):

…the more autonomous these systems become, the less it will be possible to properly hold those who designed them or ordered their use responsible for their actions. Yet the impossibility of punishing the machine means that we cannot hold the machine responsible. We can insist that the officer who orders their use be held responsible for their actions, but only at the cost of allowing that they should sometimes be held entirely responsible for actions over which they had no control. For the foreseeable future then, the deployment of weapon systems controlled by artificial intelligences in warfare is therefore unfair either to potential casualties in the theatre of war, or to the officer who will be held responsible for their use.

This argument has been debated a lot since Sparrow first propounded it. What is often missing from those debates is some application of the legal doctrines of responsibility. Law has long dealt with analogous scenarios — e.g. people directing the actions of others to nefarious ends — and has developed a number of doctrines that plug the potential responsibility gaps that arise in these scenarios. What’s more, legal theorists and philosophers have long analysed the moral appropriateness of these doctrines, highlighting their weaknesses, and suggesting reforms that bring them into closer alignment with our intuitions of justice. Deeper engagement with these legal discussions could move the debate on killer robots and responsibility gaps forward.

Fortunately, some legal theorists have stepped up to the plate. Neha Jain is one example. In her recent paper ‘Autonomous weapons systems: new frameworks for individual responsibility’, she provides a thorough overview of the legal doctrines that could be used to plug the responsibility gap. There is a lot of insight to be gleaned from this paper, and I want to run through its main arguments in this post.


1. What is an autonomous weapons system anyway?

To get things started we need a sharper understanding of robot autonomy and the responsibility gap. We’ll begin with the latter. The typical scenario imagined by proponents of the gap is one in which some military officer or commander has authorised the battlefield use of an autonomous weapons system (AWS), and that AWS has then used its lethal firepower to commit some act that, had it been performed by a human combatant, would almost certainly be deemed criminal (or contrary to the laws of war).

There are two responsibility gaps that arise in this typical scenario. There is the gap between the robot and the criminal/illegal outcome. This gap arises because the robot cannot be a fitting subject for attributions of responsibility. I looked at the arguments that can be made in favour of this view before. It may be possible, one day, to create a robot that meets all the criteria for moral personhood, but this is not going to happen for a long time, and there may be reason to think that we would never take claims of robot responsibility seriously. The other gap arises because there is some normative distance between what the AWS did and the authorisation of the officer or commander. The argument here would be that the AWS did something that was not foreseeable or foreseen by the officer/commander, or acted beyond their control or authorisation. Thus, they cannot be fairly held responsible for what the robot did.

I have tried to illustrate this typical scenario, and the two responsibility gaps associated with it, in the diagram below. We will be focusing on the gap between the officer/commander and the robot for the remainder of this post.



As you can see, the credibility of the responsibility gaps hinges on how autonomous the robots really are. This prompts the question: what do we mean when we ascribe ‘autonomy’ to a robot? There are two competing views. The first describes robot autonomy as being essentially analogous to human autonomy. This is called ‘strong autonomy’ in Jain’s paper:

Strong Robot Autonomy: A robotic system is strongly autonomous if it is ‘capable of acting for reasons that are internal to it and in light of its own experience’ (Jain 2016, 304).

If a robot has this type of autonomy it is, effectively, a moral agent, though perhaps not a responsible moral agent due to certain incapacities (more on this below). A responsibility gap then arises between a commander/officer and a strongly autonomous robot in much the same way that a responsibility gap arises between two human beings.

A second school of thought rejects this analogy-based approach to robot autonomy, arguing that when roboticists describe a system as ‘autonomous’ they are using the term in a distinct, non-analogous fashion. Jain refers to this as emergent autonomy:

Emergent Robot Autonomy: A robotic system is emergently autonomous if its behaviour is dependent on ‘sensor data (which can be unpredictable) and on stochastic (probability-based) reasoning that is used for learning and error correction’ (Jain 2016, 305).

This type of autonomy has more to do with the dynamic and adaptive capabilities of the robot than with its powers of moral reasoning and its capacity for ‘free’ will. The robot is autonomous if it can be deployed in a variety of environments and can respond to the contingent variables in those environments in an adaptive manner. Emergent autonomy creates a responsibility gap because the behaviour of the robot is unpredictable and unforeseeable.
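
To see why this kind of autonomy undermines foreseeability, consider a minimal sketch (my own toy illustration, not anything from Jain's paper) of a system whose choices depend on noisy sensor data and a probability-based decision rule. The distances, noise model and engagement rule are all invented for the example:

```python
import random

def read_sensor(true_distance):
    """Simulate a noisy range sensor (the noise model is an assumption)."""
    return true_distance + random.gauss(0, 2.0)

def choose_action(estimated_distance):
    """Stochastic (probability-based) rule: the closer the estimated
    target appears, the more likely the system is to engage."""
    p_engage = max(0.0, min(1.0, 1.0 - estimated_distance / 100.0))
    return "engage" if random.random() < p_engage else "hold"

# Ten deployments in the *same* true environment: the behaviour varies
# from run to run, which is what makes it hard to foresee or second-guess.
for run in range(10):
    estimate = read_sensor(true_distance=60.0)
    print(run, round(estimate, 1), choose_action(estimate))
```

Even in this trivial case the output differs on every execution; scale the same ingredients up to a learning system in an open environment and the commander's epistemic position gets correspondingly worse.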

Jain’s goal is to identify legal doctrines that can be used to plug the responsibility gap no matter what type of autonomy we ascribe to the robotic system.


2. Plugging the Gap in the Case of Strong Autonomy
Suppose a robotic system is strongly autonomous. Does this mean that the officer/commander who deployed the system cannot be held responsible for what it does? No. Legal systems have long dealt with this problem, and have developed two distinct doctrines to address it. The first is the doctrine of innocent agency or perpetration; the second is the doctrine of command responsibility.



The doctrine of innocent agency or perpetration is likely to be less familiar. It describes a scenario in which one human being (the principal) uses another human being (or, as we will see, a human-run organisational apparatus) to commit a criminal act on their behalf. Consider the following example:

Poisoning-via-child: Grace has grown tired of her husband. She wants to poison him. But she doesn’t want to administer the lethal dose herself. She mixes the poison in with sugar and she asks her ten-year-old son to ‘put some sugar in daddy’s tea’. He dutifully does so.

In this example, Grace has used another human being to commit a criminal act on her behalf. Clearly that human being is innocent — he did not know what he was really doing — so it would be unfair or inappropriate to hold him responsible (contrast with a hypothetical case in which Grace hired a hitman to do her bidding). Common law systems allow for Grace to be held responsible for the crime through the doctrine of innocent agency. This applies whenever one human being uses another human being with some dispositional or circumstantial incapacity for responsibility to perform a criminal act on their behalf. The classic cases involve taking advantage of another person’s mental illness, ignorance or juvenility.

Similarly, but perhaps more interestingly, there is the civil law doctrine of perpetration. This doctrine covers cases in which one individual (the indirect perpetrator) gets another (the direct perpetrator) to commit a criminal act on their behalf. The indirect perpetrator uses the direct perpetrator as a tool, and hence the direct perpetrator must be at some sort of disadvantage or deficit relative to the indirect perpetrator. The German Criminal Code sets this out in Section 25, and the doctrine has some interesting features:

Section 25 of the Strafgesetzbuch: The Hintermann (the ‘man behind’) is the indirect perpetrator. He or she uses a Vordermann as the direct perpetrator. The Vordermann possesses Handlungsherrschaft (control over the act), but the Hintermann exercises Willensherrschaft (domination) over the will of the Vordermann.

Three main types of Willensherrschaft are recognised: (i) coercion; (ii) taking advantage of a mistake made by the Vordermann; or (iii) possessing control over some organisational apparatus (Organisationsherrschaft). The latter is particularly interesting because it allows us to imagine a case in which the indirect perpetrator uses some bureaucratic apparatus to carry out their will. It is also interesting because Article 25 of the Rome Statute establishing the International Criminal Court recognises the doctrine of perpetration, and the ICC has held in its decisions that it covers perpetration via organisational apparatus.

Let’s now bring it back to the issue at hand. How do these doctrines apply to killer robots and the responsibility gap? The answer should be obvious enough. If robots possess the strong form of autonomy, but they have some deficit that prevents them from being responsible moral agents, then they are, in effect, like the innocent agents or direct perpetrators. Their human officers/commanders can be held responsible for what they do, through the doctrine of perpetration, provided those officers/commanders intended for them to do what they did, or knew that they would do what they did.

The problem with this, however, is that it doesn’t cover scenarios in which the robot acts outside or beyond the authorisation of the officer/commander. To plug the gap in those cases you would probably need the doctrine of command responsibility. This is a better known doctrine, though it has been controversial. As Jain describes it, there are three basic features to command responsibility:

Command Responsibility: A doctrine allowing for ascriptions of responsibility in cases where (a) there is a superior-subordinate relationship where the superior has effective control over the subordinate; (b) the superior knew or had reason to know (or should have known) of the subordinates’ crimes and (c) the superior failed to control, prevent or punish the commission of the offences.

Command responsibility covers both military and civilian commanders, though it is usually applied more strictly in the case of military commanders. Civilian commanders must have known of the actions of the subordinates; military commanders can be held responsible for failing to know when they should have known (a so-called ‘negligence standard’).
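
Purely as a toy formalization (my own sketch, not anything from Jain's paper), the doctrine's three conditions can be written as a simple rule check; the field names are invented, but the logic tracks the distinction just described between military and civilian superiors:

```python
from dataclasses import dataclass

@dataclass
class Case:
    effective_control: bool   # (a) superior-subordinate relationship with effective control
    knew: bool                # actual knowledge of the subordinate's crimes
    should_have_known: bool   # constructive knowledge (negligence)
    failed_to_act: bool       # (c) failure to control, prevent or punish
    military_superior: bool   # military superiors face the stricter standard

def command_responsibility(c: Case) -> bool:
    # (b) civilian superiors must actually have known; military superiors
    # can also be liable for what they merely should have known.
    mental_element = c.knew or (c.military_superior and c.should_have_known)
    return c.effective_control and mental_element and c.failed_to_act

# A military commander who should have known and did nothing is caught...
print(command_responsibility(Case(True, False, True, True, True)))   # True
# ...but the same facts fall short for a civilian superior.
print(command_responsibility(Case(True, False, True, True, False)))  # False
```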

Command responsibility is well-recognised in international law and has been enshrined in Article 28 of the Rome Statute on the International Criminal Court. For it to apply, there must be a causal connection between what the superior did (or failed to do) and the actions of the subordinates. There must also be some temporal coincidence between the superior’s control and the subordinates’ actions.

Again, we can see easily enough how this could apply to the case of the strongly autonomous robot. The commander who deploys the robot could be held responsible for what it does if they have effective control over the robot, if they knew (or ought to have known) that it was doing something illegal, and if they failed to intervene and stop it from happening.

The problem with this, however, is that it assumes the robot acts in a rational and predictable manner — that its actions are ones that the commander could have known about and, perhaps, should have known about. If the robot is strongly autonomous, that might hold true; but if the robot is emergently autonomous, it might not.


3. Plugging the Gap in the Case of Emergent Autonomy
So we come to the case of emergent autonomy. Recall, the challenge here is that the robot behaves in a dynamic and adaptive manner. It responds to its environment in a complex and unpredictable way. The way in which it adapts and responds may be quite opaque to its human commanders (and even its developers, if it relies on certain machine learning tools) and so they will be less willing and less able to second-guess its judgments.

This creates serious problems when it comes to plugging the responsibility gap. Although we could imagine using the doctrines of perpetration and/or command responsibility once again, we would quickly be forced to ask whether it was right and proper to do so. The critical questions will relate to the mental element required by both doctrines. I was a little sketchy about this in the previous section. I need to be clearer now.

In criminal law, responsibility depends on satisfying certain mens rea (mental element) conditions for an offence. In other words, in order to be held responsible you must have intended, known, or been reckless/negligent with respect to some fact or other. In the case of murder, for example, you must have intended to kill or cause grievous bodily harm to another person. In the case of manslaughter (a lesser offence) you must have been reckless (or in some cases grossly negligent) with respect to the chances that your action might cause another’s death.

If we want to apply doctrines like command responsibility to the case of an emergently autonomous robot, we will have to do so via something like the recklessness or negligence mens rea standards. The traditional application of the perpetration doctrine does not allow for this. The principal or Hintermann must have intention or knowledge with respect to the elements of the offence committed by the Vordermann. The command responsibility doctrine does allow for the use of recklessness and negligence. In the case of civilian commanders, a recklessness mental element is required; in the case of military commanders, a negligence standard is allowed. So if we wanted to apply perpetration to emergently autonomous robots, we would have to lower the mens rea standard.



Even if we did that it might be difficult to plug the gap. Consider recklessness first. There is no uniform agreement on what this mental element entails. The uncontroversial part of it is that in order to be reckless one must have recognised and disregarded a substantial risk that the criminal act would occur. The controversy arises over the standards by which we assess whether there was a consciously disregarded substantial risk. Must the person whose conduct led to the criminal act have recognised the risk as substantial? Or must he/she simply have recognised a risk, leaving it up to the rest of us to decide whether the risk was substantial or not? It makes a difference. Some people might have different views on what kinds of risks are substantial. Military commanders, for instance, might have very different standards from civilian commanders or members of the general public. What we perceive to be a substantial risk might be par for the course for them.

There is also disagreement as to whether the defendant must consciously recognise the specific type of harm that occurred or whether it is enough that they recognised a general category of harm into which the specific harm fits. So, in the case of a military operation gone awry, must the commander have recognised the general risk of collateral damage, or the specific risk that a particular, identified group of people would be collateral damage? Again, it makes a big difference. If it is the more general category that must be recognised and disregarded, it will be easier to argue that commanders are reckless.

Similar considerations arise in the case of negligence. Negligence covers situations where risks were not consciously recognised and disregarded but ought to have been. It is all about standards of care and deviations therefrom. What would the reasonable person or, in the case of professionals, the reasonable professional have foreseen? Would the reasonable military commander have foreseen the risk of an AWS doing something untoward? What if it is completely unprecedented?

It seems obvious enough that the reasonable military commander must always foresee some risk when it comes to the use of AWSs. Military operations always carry some risk and AWSs are lethal weapons. But should that be enough for them to fall under the negligence standard? If we make it very easy for commanders to be held responsible, it could have a chilling effect on both the use and development of AWSs.

That might be welcomed by the Campaign to Stop Killer Robots, but not everyone will be so keen. They will say that there are potential benefits to this technology (think about the arguments made in favour of self-driving cars) and that setting the mens rea standard too low will cut us off from these benefits.

Anyway, that’s it for this post.