
Philosophical Disquisitions

Summary: Interviews with experts about the philosophy of the future.

Podcasts:

 72 - Grief in the Time of a Pandemic | File Type: audio/mpeg | Duration: Unknown

Lots of people are dying right now. But people die all the time. How should we respond to all this death? In this episode I talk to Michael Cholbi about the philosophy of grief. Michael Cholbi is Professor of Philosophy at the University of Edinburgh. He has published widely in ethical theory, practical ethics, and the philosophy of death and dying. We discuss the nature of grief, the ethics of grief and how grief might change in the midst of a pandemic. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:
What is grief?
What are the different forms of grief?
Is grief always about death?
Is grief a good thing? Is grief a bad thing?
Does the cause of death make a difference to grief?
How does the COVID-19 pandemic disrupt grief?
What are the politics of grief?
Will future societies memorialise the deaths of people in the pandemic?

Relevant Links

Michael's Homepage
Regret, Resilience and the Nature of Grief by Michael
Finding the Good in Grief by Michael
Grief's Rationality, Backward and Forward by Michael
Coping with Grief: A Series of Philosophical Disquisitions by me
Grieving alone — coronavirus upends funeral rites (Financial Times)
Coronavirus: How Covid-19 is denying dignity to the dead in Italy (BBC)
Why the 1918 Spanish flu defied both memory and imagination
100 years later, why don't we commemorate the victims and heroes of 'Spanish flu'?

 71 - COVID 19 and the Ethics of Infectious Disease Control | File Type: audio/mpeg | Duration: Unknown

As nearly half the world's population is now under some form of quarantine or lockdown, it seems like an apt time to consider the ethics of infectious disease control measures of this sort. In this episode, I chat to Jonathan Pugh and Tom Douglas, both of whom are Senior Research Fellows at the Uehiro Centre for Practical Ethics in Oxford, about this very issue. We talk about the moral principles that should apply to our evaluation of infectious disease control and some of the typical objections to it. Throughout we focus specifically on some of the different interventions that are being applied to tackle COVID-19. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:
Methods of infectious disease control
Consequentialist justifications for disease control
Non-consequentialist justifications
The proportionality of disease control measures
Could these measures stigmatise certain populations?
Could they exacerbate inequality or fuel discrimination?
Must we err on the side of precaution in the midst of a novel pandemic?
Is ethical evaluation a luxury at a time like this?

Relevant Links

Jonathan Pugh's Homepage
Tom Douglas's Homepage
'Pandemic Ethics: Infectious Pathogen Control Measures and Moral Philosophy' by Jonathan and Tom
'Justifications for Non-Consensual Medical Intervention: From Infectious Disease Control to Criminal Rehabilitation' by Jonathan and Tom
'Infection Control for Third-Party Benefit: Lessons from Criminal Justice' by Tom
How Different Asian Countries Responded to COVID 19

 70 - Ethics in the time of Corona | File Type: audio/mpeg | Duration: Unknown

Like almost everyone else, I have been obsessing over the novel coronavirus pandemic for the past few months. Given the dramatic escalation in the pandemic in the past week, and the tricky ethical questions it raises for everyone, I thought it was about time to do an episode about it. So I reached out to people on Twitter and Jeff Sebo kindly volunteered himself to join me for a conversation. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University. Jeff's research focuses on bioethics, animal ethics, and environmental ethics. This episode was put together in a hurry but I think it covers a lot of important ground. I hope you find it informative and useful. Be safe! You can download the episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Spotify, Stitcher and many other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:
Individual duties and responsibilities to stop the spread
Medical ethics and medical triage
Balancing short-term versus long-term interests
Health versus well-being and other goods
State responsibilities and the social safety net
The duties of politicians and public officials
The risk of authoritarianism and the erosion of democratic values
Global justice and racism/xenophobia
Our duties to frontline workers and vulnerable members of society
Animal ethics and the risks of industrial agriculture
The ethical upside of the pandemic: will this lead to more solidarity and sustainability?
Pandemics and global catastrophic risks
What should we be doing right now?

Some Relevant Links

Jeff's webpage
Patient 31 in South Korea
The Duty to Vaccinate and collective action problems
Italian medical ethics recommendations
COVID 19 and the Impossibility of Morality
The problem with the UK government's (former) 'herd immunity' approach
A history of the Spanish Flu

 69 - Wood on Sustainable Superabundance | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to David Wood. David is currently the chair of the London Futurists group and a full-time futurist speaker, analyst, commentator, and writer. He studied the philosophy of science at Cambridge University. He has a background in designing, architecting, implementing, supporting, and avidly using smart mobile devices. He is the author or lead editor of nine books including "RAFT 2035", "The Abolition of Aging", "Transcending Politics", and "Sustainable Superabundance". We chat about the last book on this list -- Sustainable Superabundance -- and its case for an optimistic future. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
1:40 - Who are the London Futurists? What do they do?
3:34 - Why did David write Sustainable Superabundance?
7:22 - What is sustainable superabundance?
11:05 - Seven spheres of flourishing and seven types of superabundance?
16:16 - Why is David a transhumanist?
20:20 - Dealing with two criticisms of transhumanism: (i) isn't it naive and Pollyannaish? (ii) isn't it elitist, inegalitarian and dangerous?
30:00 - Key principles of transhumanism
34:52 - How will we address the energy needs of the future?
40:35 - How optimistic can we really be about the future of energy?
46:20 - Dealing with pessimism about food production?
52:48 - Are we heading for another AI winter?
1:01:08 - The politics of superabundance - what needs to change?

Relevant Links

David Wood on Twitter
London Futurists website
London Futurists Youtube
Sustainable Superabundance by David
Other books in the Transpolitica series
To be a machine by Mark O'Connell
Previous episode with James Hughes about techno-progressive transhumanism
Previous episode with Rick Searle about the dark side of transhumanism

 68 - Earp on the Ethics of Love Drugs | File Type: audio/mpeg | Duration: Unknown

In this episode I talk (again) to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the philosophy of science. His research has been covered in Nature, Popular Science, The Chronicle of Higher Education, The Atlantic, New Scientist, and other major outlets. We talk about his latest book, co-authored with Julian Savulescu, on love drugs. You can listen to the episode below or download it here. You can also subscribe to the podcast on Apple, Stitcher, Spotify and other leading podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
2:17 - What is love? (Baby don't hurt me) What is a love drug?
7:30 - What are the biological underpinnings of love?
10:00 - How constraining is the biological foundation to love?
13:45 - So we're not natural born monogamists or polyamorists?
17:48 - Examples of actual love drugs
23:32 - MDMA in couples therapy
27:55 - The situational ethics of love drugs
33:25 - The non-specific nature of love drugs
39:00 - The basic case in favour of love drugs
40:48 - The ethics of anti-love drugs
44:00 - The ethics of conversion therapy
48:15 - Individuals vs systemic change
50:20 - Do love drugs undermine autonomy or authenticity?
54:20 - The Vice of In-Principlism
56:30 - The future of love drugs

Relevant Links

Brian's Academia.edu page (freely accessible papers)
Brian's Researchgate page (freely accessible papers)
Brian asking Sam Harris a question
The book: Love Drugs or Love is the Drug
'Love and enhancement technology' by Brian Earp
'The Vice of In-principlism and the Harmfulness of Love' by me

 67 - Rini on Deepfakes and the Epistemic Backstop | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow at the NYU Center for Bioethics, a postdoctoral research fellow in philosophy at Oxford University and a junior research fellow of Jesus College Oxford. We talk about the political and epistemological consequences of deepfakes. This is a fascinating and timely conversation. You can download this episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
3:20 - What are deepfakes?
7:35 - What is the academic justification for creating deepfakes (if any)?
11:35 - The different uses of deepfakes: Porn versus Politics
16:00 - The epistemic backstop and the role of audiovisual recordings
22:50 - Two ways that recordings regulate our testimonial practices
26:00 - But recordings aren't a window onto the truth, are they?
34:34 - Is the Golden Age of recordings over?
39:36 - Will the rise of deepfakes lead to the rise of epistemic elites?
44:32 - How will deepfakes fuel political partisanship?
50:28 - Deepfakes and the end of public reason
54:15 - Is there something particularly disruptive about deepfakes?
58:25 - What can be done to address the problem?

Relevant Links

Regina's Homepage
Regina's Philpapers Page
"Deepfakes and the Epistemic Backstop" by Regina
"Fake News and Partisan Epistemology" by Regina
Jeremy Corbyn and Boris Johnson Deepfake Video
"California's Anti-Deepfake Law Is Far Too Feeble" Op-Ed in Wired

 66 - Wong on Confucianism, Robots and Moral Deskilling | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford and Hong Kong. In 2017, he joined the Research Group for Ethics in Information Technology at the Department of Informatics, Universität Hamburg. We talk about the robotic disruption of morality and how it affects our capacity to develop moral virtues. Pak argues for a distinctive Confucian approach to this topic and so provides something of a masterclass on Confucian virtue ethics in the course of our conversation. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
2:56 - How do robots disrupt our moral lives?
7:18 - Robots and Moral Deskilling
12:52 - The Folk Model of Virtue Acquisition
21:16 - The Confucian approach to Ethics
24:28 - Confucianism versus the European approach
29:05 - Confucianism and situationism
34:00 - The Importance of Rituals
39:39 - A Confucian Response to Moral Deskilling
43:37 - Criticisms (moral silencing)
46:48 - Generalising the Confucian approach
50:00 - Do we need new Confucian rituals?

Relevant Links

Pak's homepage at the University of Hamburg
Pak's Philpeople Profile
"Rituals and Machines: A Confucian Response to Technology Driven Moral Deskilling" by Pak
"Responsible Innovation for Decent Nonliberal Peoples: A Dilemma?" by Pak
"Consenting to Geoengineering" by Pak
Episode 45 with Shannon Vallor on Technology and the Virtues

 65 - Vold on How We Can Extend Our Minds With AI | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research Fellow at the Faculty of Philosophy, and a Digital Charter Fellow at the Alan Turing Institute. We talk about the ethics of extended cognition and how it pertains to the use of artificial intelligence. This is a fascinating topic because it addresses one of the oft-overlooked effects of AI on the human mind. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
1:55 - Some examples of AI cognitive extension
13:07 - Defining cognitive extension
17:25 - Extended cognition versus extended mind
19:44 - The Coupling-Constitution Fallacy
21:50 - Understanding different theories of situated cognition
27:20 - The Coupling-Constitution Fallacy Redux
30:20 - What is distinctive about AI-based cognitive extension?
34:20 - The three/four different ways of thinking about human interactions with AI
40:04 - Problems with this framework
49:37 - The Problem of Cognitive Atrophy
53:31 - The Moral Status of AI Extenders
57:12 - The Problem of Autonomy and Manipulation
58:55 - The policy implications of recognising AI cognitive extension

Relevant Links

Karina's homepage
Karina at the Leverhulme Centre for the Future of Intelligence
"AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI" by José Hernández Orallo and Karina Vold
"The Parity Argument for Extended Consciousness" by Karina
"Are 'you' just inside your skin or is your smartphone part of you?" by Karina
"The Extended Mind" by Clark and Chalmers
Theory and Application of the Extended Mind (series by me)

 Mass Surveillance, Artificial Intelligence and New Legal Challenges | File Type: audio/mpeg | Duration: Unknown

[This is the text of a talk I gave to the Irish Law Reform Commission Annual Conference in Dublin on the 13th of November 2018. You can listen to an audio version of this lecture here or using the embedded player above.]

In the mid-19th century, a set of laws was created to address the menace that newly-invented automobiles and locomotives posed to other road users. One of the first such laws was the English Locomotive Act 1865, which subsequently became known as the ‘Red Flag Act’. Under this act, any user of a self-propelled vehicle had to ensure that at least two people were employed to manage the vehicle and that one of these persons:

“while any locomotive is in motion, shall precede such locomotive on foot by not less than sixty yards, and shall carry a red flag constantly displayed, and shall warn the riders and drivers of horses of the approach of such locomotives…”

The motive behind this law was commendable. Automobiles did pose a new threat to other, more vulnerable, road users. But to modern eyes the law was also, clearly, ridiculous. To suggest that every car should be preceded by a pedestrian waving a red flag would seem to defeat the point of having a car: the whole idea is that it is faster and more efficient than walking. The ridiculous nature of the law eventually became apparent to its creators and all such laws were repealed in the 1890s, approximately 30 years after their introduction.[1]

The story of the Red Flag laws shows that legal systems often get new and emerging technologies badly wrong. By focusing on the obvious or immediate risks, the law can neglect the long-term benefits and costs. I mention all this by way of warning. As I understand it, it has been over 20 years since the Law Reform Commission considered the legal challenges around privacy and surveillance. A lot has happened in the intervening decades. My goal in this talk is to give some sense of where we are now and what issues may need to be addressed over the coming years. In doing this, I hope not to forget the lesson of the Red Flag laws.

1. What’s changed?

Let me start with the obvious question. What has changed, technologically speaking, since the LRC last considered issues around privacy and surveillance? Two things stand out.

First, we have entered an era of mass surveillance. The proliferation of digital devices — laptops, computers, tablets, smart phones, smart watches, smart cars, smart fridges, smart thermostats and so forth — combined with increased internet connectivity has resulted in a world in which we are all now monitored and recorded every minute of every day of our lives. The cheapness and ubiquity of data-collecting devices means that it is now, in principle, possible to imbue every object, animal and person with some data-monitoring technology. The result is what some scholars refer to as the ‘internet of everything’ and with it the possibility of a perfect ‘digital panopticon’. This era of mass surveillance puts increased pressure on privacy and, at least within the EU, has prompted significant legislative intervention in the form of the GDPR.

Second, we have created technologies that can take advantage of all the data that is being collected. To state the obvious: data alone is not enough. As all lawyers know, it is easy to befuddle the opposition in a complex lawsuit by ‘dumping’ a lot of data on them during discovery. They drown in the resultant sea of information. It is what we do with the data that really matters.
In this respect, it is the marriage of mass surveillance with new kinds of artificial intelligence that creates the new legal challenges that we must now tackle with some urgency. Artificial intelligence allows us to do three important things with the vast quantities of data that are now being collected:

(i) It enables new kinds of pattern matching - what I mean here is that AI systems can spot patterns in data that were historically difficult for computer systems to spot (e.g. image or voice recognition), and that may also be difficult, if not impossible, for humans to spot due to their complexity. To put it another way, AI allows us to understand data in new ways.

(ii) It enables the creation of new kinds of informational product - what I mean here is that the AI systems don’t simply rebroadcast dispassionate and objective forms of the data we collect. They actively construct and reshape the data into artifacts that can be more or less useful to humans.

(iii) It enables new kinds of action and behaviour - what I mean here is that the informational products created by these AI systems are not simply inert artifacts that we observe with bemused detachment. They are prompts to change and alter human behaviour and decision-making.

On top of all this, these AI systems do these things with increasing autonomy (or, less controversially, automation). Although humans do assist the AI systems in understanding, constructing and acting on foot of the data being collected, advances in AI and robotics make it increasingly possible for machines to do things without direct human assistance or intervention.

It is these ways of using data, coupled with increasing automation, that I believe give rise to the new legal challenges. It is impossible for me to cover all of these challenges in this talk. So what I will do instead is to discuss three case studies that I think are indicative of the kinds of challenges that need to be addressed, and that correspond to the three things we can now do with the data that we are collecting.

2. Case Study: Facial Recognition Technology

The first case study has to do with facial recognition technology. This is an excellent example of how AI can understand data in new ways. Facial recognition technology is essentially like fingerprinting for the face. From a selection of images, an algorithm can construct a unique mathematical model of your facial features, which can then be used to track and trace your identity across numerous locations.

The potential conveniences of this technology are considerable: faster security clearance at airports; an easy way to record and confirm attendance in schools; an end to complex passwords when accessing and using your digital services; a way for security services to track and identify criminals; a tool for locating missing persons and finding old friends. Little surprise then that many of us have already welcomed the technology into our lives. It is now the default security setting on the current generation of smartphones. It is also being trialled at airports (including Dublin Airport),[2] train stations and public squares around the world. It is cheap and easily plugged into existing CCTV surveillance systems. It can also take advantage of the vast databases of facial images collected by governments and social media engines.

Despite its advantages, facial recognition technology also poses a significant number of risks. It enables and normalises blanket surveillance of individuals across numerous environments.
This makes it the perfect tool for oppressive governments and manipulative corporations. Our faces are among our most unique and important features, central to our sense of who we are and how we relate to each other — think of the Beatles’ immortal line ‘Eleanor Rigby puts on the face that she keeps in the jar by the door’. Facial recognition technology captures this unique feature and turns it into a digital product that can be copied and traded, and used for marketing, intimidation and harassment.

Consider, for example, the unintended consequences of the FindFace app that was released in Russia in 2016. Intended by its creators to be a way of making new friends, the FindFace app matched images on your phone with images in social media databases, thus allowing you to identify people you may have met but whose names you cannot remember. Suppose you met someone at a party, took a picture together with them, but then didn’t get their name. FindFace allows you to use the photo to trace their real identity.[3] What a wonderful idea, right? Now you need never miss out on an opportunity for friendship because of oversight or poor memory. Well, as you might imagine, the app also has a dark side. It turns out to be the perfect technology for stalkers, harassers and doxxers (the internet slang for those who want to out people’s real world identities). Anyone who is trying to hide or obscure their identity can now be traced and tracked by anyone who happens to take a photograph of them.

What’s more, facial recognition technology is not perfect. It has been shown to be less reliable when dealing with non-white faces, and there are several documented cases in which it matches the wrong faces, thus wrongly assuming someone is a criminal when they are not. For example, many US drivers have had their licences cancelled because an algorithm has found two faces on a licence database to be suspiciously similar and has then wrongly assumed the people in question to be using a false identity. In another famous illustration of the problem, 28 members of the US Congress (most of them members of racial minorities) were falsely matched with criminal mugshots using facial recognition technology created by Amazon.[4] As some researchers have put it, the widespread and indiscriminate use of facial recognition means that we are all now part of a perpetual line-up that is both biased and error prone.[5] The conveniences of facial recognition thus come at a price, one that often only becomes apparent when something goes wrong, and is more costly for some social groups than others.

What should be done about this from a legal perspective? The obvious answer is to carefully regulate the technology to manage its risks and opportunities. This is, in a sense, what is already being done under the GDPR. Article 9 of the GDPR stipulates that facial recognition is a kind of biometric data that is subject to special protections. The default position is that it should not be collected, but this is subject to a long list of qualifications and exceptions. It is, for example, permissible to collect it if the data has already been made public, if you get the explicit consent of the person, if it serves some legitimate public interest, if it is medically necessary or necessary for public health reasons, if it is necessary to protect other rights and so on. Clearly the GDPR does restrict facial recognition in some ways.
A recent Swedish case fined a school for the indiscriminate use of facial recognition for attendance monitoring.[6] Nevertheless, the long list of exceptions makes the widespread use of facial recognition not just a possibility but a likelihood. This is something the EU is aware of and, in light of the Swedish case, it has signalled an intention to introduce stricter regulation of facial recognition. This is something we in Ireland should also be considering. The GDPR allows states to introduce stricter protections against certain kinds of data collection. And, according to some privacy scholars, we need the strictest possible protections to save us from the depredations of facial recognition.

Woodrow Hartzog, one of the foremost privacy scholars in the US, and Evan Selinger, a philosopher specialising in the ethics of technology, have recently argued that facial recognition technology must be banned. As they put it (somewhat alarmingly):[7]

“The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives. Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”

They caution against anyone who thinks that the technology can be procedurally regulated, arguing that governmental and commercial interests will always lobby for expansion of the technology beyond its initially prescribed remit. They also argue that attempts at informed consent will be (and already are) a ‘spectacular failure’ because people don’t understand what they are consenting to when they give away their facial fingerprint.

Some people might find this call for a categorical ban extreme, unnecessary and impractical. Why throw the baby out with the bathwater, and other cliches to that effect. But I would like to suggest that there is something worth taking seriously here, particularly since facial recognition technology is just the tip of the iceberg of data collection. People are already experimenting with emotion recognition technology, which uses facial images to predict future behaviour in real time, and there are many other kinds of sensitive data that are being collected, digitised and traded. Genetic data is perhaps the most obvious other example. Given that data is what fuels the fire of AI, it is possible that we should consider cutting off some of the fuel supply in its entirety.

3. Case Study: Deepfakes

Let me move on to my second case study. This one has to do with how AI is used to create new informational products from data. As an illustration of this I will focus on so-called ‘deepfake’ technology. This is a machine learning technique that allows you to construct realistic synthetic media from databases of images and audio files. The most prevalent use of deepfakes is, perhaps unsurprisingly, in the world of pornography, where the faces of famous actors have been repeatedly grafted onto porn videos. This is disturbing and makes deepfakes an ideal technology for ‘synthetic’ revenge porn. Perhaps more socially significant than this, however, are the potential political uses of deepfake technology.

In 2017, a team of researchers at the University of Washington created a series of deepfake videos of Barack Obama which I will now play for you.[8] The images in these videos are artificial. They haven’t been edited together from different clips. They have been synthetically constructed by an algorithm from a database of audiovisual materials.
Obviously, the video isn’t entirely convincing. If you look and listen closely you can see that there is something stilted and artificial about it. In addition to this, it uses pre-recorded audio clips to sync to the synthetic video. Nevertheless, if you weren’t looking too closely, you might be convinced it was real. Furthermore, there are other teams working on using the same basic technique to create synthetic audio too. So, as the technology improves, it could be very difficult for even the most discerning viewers to tell the difference between fiction and reality.

Now there is nothing new about synthetic media. With the support of the New Zealand Law Foundation, Tom Barraclough and Curtis Barnes have published one of the most detailed investigations into the legal policy implications of deepfake technology.[9] In their report, they highlight the fact that an awful lot of existing audiovisual media is synthetic: it is all processed, manipulated and edited to some degree. There is also a long history of creating artistic and satirical synthetic representations of political and public figures. Think, for example, of the caricatures in Punch magazine or in the puppet show Spitting Image. Many people who use deepfake technology to create synthetic media will, no doubt, claim a legitimate purpose in doing so. They will say they are engaging in legitimate satire or critique, or producing works of artistic significance.

Nevertheless, there does seem to be something worrying about deepfake technology. The highly realistic nature of the audiovisual material being created makes it the ideal vehicle for harassment, manipulation, defamation, forgery and fraud. Furthermore, the realism of the resultant material also poses significant epistemic challenges for society. The philosopher Regina Rini captures this problem well. She argues that deepfake technology poses a threat to our society’s ‘epistemic backstop’. What she means is that as a society we are highly reliant on testimony from others to get by. We rely on it for news and information, we use it to form expectations about the world and build trust in others. But we know that testimony is not always reliable. Sometimes people will lie to us; sometimes they will forget what really happened. Audiovisual recordings provide an important check on potentially misleading forms of testimony. They encourage honesty and competence. As Rini puts it:[10]

“The availability of recordings undergirds the norms of testimonial practice…Our awareness of the possibility of being recorded provides a quasi-independent check on reckless testifying, thereby strengthening the reasonability of relying on the words of others. Recordings do this in two distinctive ways: actively correcting errors in past testimony and passively regulating ongoing testimonial practices.”

The problem with deepfake technology is that it undermines this function. Audiovisual recordings can no longer provide the epistemic backstop that keeps us honest.

What does this mean for the law? I am not overly concerned about the impact of deepfake technology on legal evidence-gathering practices. The legal system, with its insistence on ‘chain of custody’ and testimonial verification of audiovisual materials, is perhaps better placed than most to deal with the threat of deepfakes (though there will be an increased need for forensic experts to identify deepfake recordings in court proceedings).
What I am more concerned about is how deepfake technologies will be weaponised to harm and intimidate others — particularly members of vulnerable populations. The question is whether anything can be done to provide legal redress for these problems. As Barraclough and Barnes point out in their report, it is exceptionally difficult to legislate in this area. How do you define the difference between real and synthetic media (if at all)? How do you balance free speech rights against the potential harms to others? Do we need specialised laws to do this or are existing laws on defamation and fraud (say) up to the task? Furthermore, given that deepfakes can be created and distributed by unknown actors, who would the potential cause of action be against? These are difficult questions to answer.

The one concrete suggestion I would make is that any existing or proposed legislation on ‘revenge porn’ should be modified so that it explicitly covers the possibility of synthetic revenge porn. Ireland is currently in the midst of legislating against the nonconsensual sharing of ‘intimate images’ in the Harassment, Harmful Communications and Related Offences Bill. I note that the current wording of the offence in section 4 of the Bill covers images that have been ‘altered’, but someone might argue that synthetically constructed images are not, strictly speaking, altered. There may be plans to change this wording to cover this possibility — I know that consultations and amendments to the Bill are ongoing[11] — but if there aren’t then I suggest that there should be.

To reiterate, I am using deepfake technology as an illustration of a more general problem. There are many other ways in which the combination of data and AI can be used to mess with the distinction between fact and fiction. The algorithmic curation and promotion of fake news, for example, or the use of virtual and augmented reality to manipulate our perception of public and private spaces, both pose significant threats to property rights, privacy rights and political rights. We need to do something to legally manage this brave new (technologically constructed) world.

4. Case Study: Algorithmic Risk Prediction

Let me turn now to my final case study. This one has to do with how data can be used to prompt new actions and behaviours in the world. For this case study, I will look to the world of algorithmic risk prediction. This is where we take a collection of datapoints concerning an individual’s behaviour and lifestyle and feed them into an algorithm that can make predictions about their likely future behaviour. This is a long-standing practice in insurance, and is now being used in making credit decisions, tax auditing, child protection, and criminal justice (to name but a few examples). I’ll focus on its use in criminal justice for illustrative purposes.

Specifically, I will focus on the debate surrounding the COMPAS algorithm, which has been used in a number of US states. The COMPAS algorithm (created by a company called Northpointe, now called Equivant) uses datapoints to generate a recidivism risk score for criminal defendants. The datapoints include things like the person’s age at arrest, their prior arrest/conviction record, the number of family members who have been arrested/convicted, their address, their education and job and so on. These are then weighted together using an algorithm to generate a risk score.
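To give a concrete sense of the general shape of such a system, here is a minimal, purely illustrative sketch of a linear risk-scoring model. The feature names, weights, and cut-off below are invented for the example; they bear no relation to the actual (and proprietary) COMPAS weighting procedure.

```python
# A toy linear risk-scoring model. All feature names, weights and the threshold
# are invented for illustration; this is NOT the proprietary COMPAS procedure.

HYPOTHETICAL_WEIGHTS = {
    "age_at_arrest": -0.03,            # each year of age slightly lowers the score
    "prior_arrests": 0.25,
    "family_members_arrested": 0.10,
    "unemployed": 0.40,                # 1 if unemployed, 0 otherwise
}
INTERCEPT = 1.0
HIGH_RISK_THRESHOLD = 2.0

def risk_score(defendant: dict) -> float:
    """Combine the defendant's datapoints into a single weighted score."""
    return INTERCEPT + sum(
        HYPOTHETICAL_WEIGHTS[feature] * value
        for feature, value in defendant.items()
    )

def risk_bucket(defendant: dict) -> str:
    """Sort the defendant into a 'high risk' or 'low risk' bucket."""
    return "high risk" if risk_score(defendant) >= HIGH_RISK_THRESHOLD else "low risk"

example = {"age_at_arrest": 24, "prior_arrests": 3,
           "family_members_arrested": 1, "unemployed": 1}
print(round(risk_score(example), 2), risk_bucket(example))  # 1.53 low risk
```

Real systems derive their weights statistically rather than by hand, but the basic pipeline (datapoints in, weighted score out, score mapped to a risk category) is the point of the illustration.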
The exact weighting procedure is unclear, since the COMPAS algorithm is a proprietary technology, but the company that created it has released a considerable amount of information about the datapoints it uses into the public domain.

If you know anything about the COMPAS algorithm you will know that it has been controversial. The controversy stems from two features of how the algorithm works. First, the algorithm is relatively opaque. This is a problem because the fair administration of justice requires that legal decision-making be transparent and open to challenge. A defendant has a right to know how a tribunal or court arrived at its decision and to challenge or question its reasoning. If this information isn’t known — either because the algorithm is intrinsically opaque or has been intentionally rendered opaque for reasons of intellectual property — then this principle of fair administration is not being upheld. This was one of the grounds on which the use of the COMPAS algorithm was challenged in the US case of Loomis v Wisconsin.[12] In that case, the defendant, Loomis, challenged his sentencing decision on the basis that the trial court had relied on the COMPAS risk score in reaching its decision. His challenge was ultimately unsuccessful. The Wisconsin Supreme Court reasoned that the trial court had not relied solely on the COMPAS risk score in reaching its decision. The risk score was just one input into the court’s decision-making process, which was itself transparent and open to challenge. That said, the court did agree that courts should be wary when relying on such algorithms and said that warnings should be attached to the scores to highlight their limitations.

The second controversy associated with the COMPAS algorithm has to do with its apparent racial bias. To understand this controversy I need to say a little bit more about how the algorithm works. Very roughly, the COMPAS algorithm is used to sort defendants into two outcome ‘buckets’: a 'high risk' reoffender bucket or a 'low risk' reoffender bucket. A number of years back a group of data journalists based at ProPublica conducted an investigation into which kinds of defendants got sorted into those buckets. They discovered something disturbing. They found that the COMPAS algorithm was more likely to give black defendants a false positive high risk score and more likely to give white defendants a false negative low risk score. The exact figures are given in the table below. Put another way, the COMPAS algorithm tended to rate black defendants as being higher risk than they actually were and white defendants as being lower risk than they actually were. This was all despite the fact that the algorithm did not explicitly use race as a criterion in its risk scores.

Needless to say, the makers of the COMPAS algorithm were not happy about this finding. They defended their algorithm, arguing that it was in fact fair and non-discriminatory because it was well calibrated. In other words, they argued that it was equally accurate in scoring defendants, irrespective of their race. If it said a black defendant was high risk, it was right about 60% of the time and if it said that a white defendant was high risk, it was right about 60% of the time. This turns out to be true.
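Before looking at how both claims can be true at once, a small numerical sketch may help. All of the figures below are invented and chosen only to mimic the structure of the dispute: a score can be equally well calibrated for two groups and still produce very different false positive and false negative rates whenever the groups' base rates of reoffending differ.

```python
# A toy illustration (all numbers invented) of how a risk score can be equally
# well calibrated for two groups and still produce unequal error rates when the
# groups' base rates differ. The structure mirrors the COMPAS dispute; the
# figures do not come from the real data.

def error_rates(n, base_rate, high_risk_share, precision_high, precision_low):
    """Return (false_positive_rate, false_negative_rate) for one group.

    precision_high / precision_low: the fraction of the 'high risk' / 'low risk'
    bucket that actually reoffends. Keeping these identical across groups is
    what 'well calibrated' means here.
    """
    high = n * high_risk_share                     # number labelled high risk
    low = n - high                                 # number labelled low risk
    reoffenders = n * base_rate
    non_reoffenders = n - reoffenders
    false_positives = high * (1 - precision_high)  # labelled high risk, did not reoffend
    false_negatives = low * precision_low          # labelled low risk, reoffended
    return false_positives / non_reoffenders, false_negatives / reoffenders

# Both groups: 60% of the high-risk bucket and 30% of the low-risk bucket reoffend
# (identical calibration). Group A has the higher base rate, so a larger share of
# it ends up in the high-risk bucket.
fpr_a, fnr_a = error_rates(n=1000, base_rate=0.51, high_risk_share=0.7,
                           precision_high=0.6, precision_low=0.3)
fpr_b, fnr_b = error_rates(n=1000, base_rate=0.39, high_risk_share=0.3,
                           precision_high=0.6, precision_low=0.3)

print(f"Group A: false positive rate {fpr_a:.0%}, false negative rate {fnr_a:.0%}")
print(f"Group B: false positive rate {fpr_b:.0%}, false negative rate {fnr_b:.0%}")
# Group A: false positive rate 57%, false negative rate 18%
# Group B: false positive rate 20%, false negative rate 54%
```

The group with the higher base rate ends up with far more false positives, and the group with the lower base rate with far more false negatives, even though the score is 'right about 60% of the time' for both groups. That is exactly the shape of the disagreement between ProPublica and the algorithm's makers.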
The reason why it doesn't immediately look like it is equally accurate upon a first glance at the relevant figures is that there are a lot more black defendants than white defendants -- an unfortunate feature of the US criminal justice system that is not caused by the algorithm but is, rather, a feature the algorithm has to work around.

So what is going on here? Is the algorithm fair or not? Here is where things get interesting. Several groups of mathematicians analysed this case and showed that the main problem here is that the makers of COMPAS and the data journalists were working with different conceptions of fairness and that these conceptions were fundamentally incompatible. This is something that can be formally proved. The clearest articulation of this proof can be found in a paper by Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan.[13] To simplify their argument, they said that there are two things you might want a fair decision algorithm to do: (i) you might want it to be well-calibrated (i.e. equally accurate in its scoring irrespective of racial group); (ii) you might want it to achieve an equal representation for all groups in the outcome buckets. They then proved that, except in two unusual cases, it is impossible to satisfy both criteria. The two unusual cases are when the algorithm is a 'perfect predictor' (i.e. it always gets things right) or, alternatively, when the base rates for the relevant populations are the same (e.g. there are the same number of black defendants as there are white defendants). Since no algorithmic decision procedure is a perfect predictor, and since our world is full of base rate inequalities, this means that no plausible real-world use of a predictive algorithm is likely to be perfectly fair and non-discriminatory. What's more, this is generally true for all algorithmic risk predictions and not just true for cases involving recidivism risk. If you would like to see a non-mathematical illustration of the problem, I highly recommend checking out a recent article in the MIT Technology Review which includes a game you can play using the COMPAS algorithm and which illustrates the hard tradeoff between different conceptions of fairness.[14]

What does all this mean for the law? Well, when it comes to the issue of transparency and challengeability, it is worth noting that the GDPR, in articles 13-15 and article 22, contains what some people refer to as a ‘right to explanation’. It states that, when automated decision procedures are used, people have a right to access meaningful information about the logic underlying the procedures. What this meaningful information looks like in practice is open to some interpretation, though there is now an increasing amount of guidance from national data protection units about what is expected.[15]

But in some ways this misses the deeper point. Even if we make these procedures perfectly transparent and explainable, there remains the question about how we manage the hard tradeoff between different conceptions of fairness and non-discrimination. Our legal conceptions of fairness are multidimensional and require us to balance competing interests. When we rely on human decision-makers to determine what is fair, we accept that there will be some fudging and compromise involved. Right now, we let this fudging take place inside the minds of the human decision-makers, oftentimes without questioning it too much or making it too explicit.
The problem with algorithmic risk predictions is that they force us to make this fudging explicit and precise. We can no longer pretend that the decision has successfully balanced all the competing interests and demands. We have to pick and choose. Thus, in some ways, the real challenge with these systems is not that they are opaque and non-transparent but, rather, that when they are transparent they force us to make hard choices.

To some, this is the great advantage of algorithmic risk prediction. A paper by Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan and Cass Sunstein entitled ‘Discrimination in the Age of the Algorithm’ makes this very case.[16] They argue that the real problem at the moment is that decision-making is discriminatory and its discriminatory nature is often implicit and hidden from view. The widespread use of transparent algorithms will force it into the open where it can be washed by the great disinfectant of sunlight. But I suspect others will be less sanguine about this new world of algorithmically mediated justice. They will argue that human-led decision-making, with its implicit fudging, is preferable, partly because it allows us to sustain the illusion of justice. Which world do we want to live in? The transparent and explicit world imagined by Kleinberg et al, or the murky and more implicit world of human decision-making? This is also a key legal challenge for the modern age.

5. Conclusion

It’s time for me to wrap up. One lingering question you might have is whether any of the challenges outlined above are genuinely new. This is a topic worth debating. In one sense, there is nothing completely new about the challenges I have just discussed. We have been dealing with variations of them for as long as humans have lived in complex, literate societies. Nevertheless, there are some differences with the past. There are differences of scope and scale — mass surveillance and AI enable the collection of data at an unprecedented scale and its use on millions of people at the same time. There are differences of speed and individuation — AI systems can update their operating parameters in real time and in highly individualised ways. And finally, there are the crucial differences in the degree of autonomy with which these systems operate, which can lead to problems in how we assign legal responsibility and liability.

Endnotes

[1] I am indebted to Jacob Turner for drawing my attention to this story. He discusses it in his book Robot Rules - Regulating Artificial Intelligence (Palgrave Macmillan, 2018). This is probably the best currently available book about AI and law.

[2] See https://www.irishtimes.com/business/technology/airport-facial-scanning-dystopian-nightmare-rebranded-as-travel-perk-1.3986321; and https://www.dublinairport.com/latest-news/2019/05/31/dublin-airport-participates-in-biometrics-trial

[3] https://arstechnica.com/tech-policy/2016/04/facial-recognition-service-becomes-a-weapon-against-russian-porn-actresses/#

[4] This was a stunt conducted by the ACLU. See here for the press release: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

[5] https://www.perpetuallineup.org/

[6] For the story, see here: https://www.bbc.com/news/technology-49489154

[7] Their original call for this can be found here: https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66

[8] The video can be found here: https://www.youtube.com/watch?v=UCwbJxW-ZRg; For more information on the research see here: https://www.washington.edu/news/2017/07/11/lip-syncing-obama-new-tools-turn-audio-clips-into-realistic-video/; https://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf

[9] The full report can be found here: https://static1.squarespace.com/static/5ca2c7abc2ff614d3d0f74b5/t/5ce26307ad4eec00016e423c/1558340402742/Perception+Inception+Report+EMBARGOED+TILL+21+May+2019.pdf

[10] The paper currently exists in a draft form but can be found here: https://philpapers.org/rec/RINDAT

[11] https://www.dccae.gov.ie/en-ie/communications/consultations/Pages/Regulation-of-Harmful-Online-Content-and-the-Implementation-of-the-revised-Audiovisual-Media-Services-Directive.aspx

[12] For a summary of the judgment, see here: https://harvardlawreview.org/2017/03/state-v-loomis/

[13] "Inherent Tradeoffs in the Fair Determination of Risk Scores" - available here: https://arxiv.org/abs/1609.05807

[14] The article can be found at this link: https://www.technologyreview.com/s/613508/ai-fairer-than-judge-criminal-risk-assessment-algorithm/

[15] Casey et al, 'Rethinking Explainable Machines' - available here: https://scholarship.law.berkeley.edu/btlj/vol34/iss1/4/

[16] An open access version of the paper can be downloaded here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3329669

 Escaping Skinner's Box: AI and the New Era of Techno-Superstition | File Type: audio/mpeg | Duration: Unknown

[The following is the text of a talk I delivered at the World Summit AI on the 10th October 2019. The talk is essentially a nugget taken from my new book Automation and Utopia. It's not an excerpt per se, but does look at one of the key arguments I make in the book. You can listen to the talk using the plugin above or download it here.]

The science fiction author Arthur C. Clarke once formulated three “laws” for thinking about the future. The third law states that “any sufficiently advanced technology is indistinguishable from magic”. The idea, I take it, is that if someone from the Paleolithic was transported to the modern world, they would be amazed by what we have achieved. Supercomputers in our pockets; machines to fly us from one side of the planet to another in less than a day; vaccines and antibiotics to cure diseases that used to kill most people in childhood. To them, these would be truly magical times.

It’s ironic then that many people alive today don’t see it that way. They see a world of materialism and reductionism. They think we have too much knowledge and control — that through technology and science we have made the world a less magical place. Well, I am here to reassure these people. One of the things AI will do is re-enchant the world and kickstart a new era of techno-superstition. If not for everyone, then at least for most people who have to work with AI on a daily basis. The catch, however, is that this is not necessarily a good thing. In fact, it is something we should worry about.

Let me explain by way of an analogy. In the late 1940s, the behaviorist psychologist BF Skinner — famous for his experiments on animal learning — got a bunch of pigeons and put them into separate boxes. Now, if you know anything about Skinner you’ll know he had a penchant for this kind of thing. He seems to have spent his adult life torturing pigeons in boxes. Each box had a window through which a food reward would be presented to the bird. Inside the box were different switches that the pigeons could press with their beaks. Ordinarily, Skinner would set up experiments like this in such a way that pressing a particular sequence of switches would trigger the release of the food. But for this particular experiment he decided to do something different. He decided to present the food at random intervals, completely unrelated to the pressing of the switches. He wanted to see what the pigeons would do as a result.

The findings were remarkable. Instead of sitting idly by and waiting patiently for their food to arrive, the pigeons took matters into their own hands. They flapped their wings repeatedly, they danced around in circles, they hopped on one foot, convinced that their actions had something to do with the presentation of the food reward. Skinner and his colleagues likened what the pigeons were doing to the ‘rain dances’ performed by various tribes around the world: they were engaging in superstitious behaviours to control an unpredictable and chaotic environment.

It’s important that we think about this situation from the pigeon’s perspective. Inside the Skinner box, they find themselves in an unfamiliar world that is deeply opaque to them. Their usual foraging tactics and strategies don’t work. Things happen to them, food gets presented, but they don’t really understand why. They cannot cope with the uncertainty; their brains rush to fill the gap and create the illusion of control.
Now what I want to argue here is that modern workers, and indeed all of us, in an environment suffused with AI, can end up sharing the predicament of Skinner’s pigeons. We can end up working inside boxes, fed information and stimuli by artificial intelligence. And inside these boxes, stuff can happen to us, work can get done, but we are not quite sure if or how our actions make a difference. We end up resorting to odd superstitions and rituals to make sense of it all and give ourselves the illusion of control, and one of the things I worry about, in particular, is that a lot of the current drive for transparent or explainable AI will reinforce this phenomenon.

This might sound far-fetched, but it’s not. There has been a lot of talk in recent years about the ‘black box’ nature of many AI systems. For example, the machine learning systems used to support risk assessments in bureaucratic, legal and financial settings. These systems all work in the same way. Data from human behaviour gets fed into them, and they then spit out risk scores and recommendations to human decision-makers. The exact rationale for those risk scores — i.e. the logic the systems use — is often hidden from view. Sometimes this is for reasons intrinsic to the coding of the algorithm; other times it is because it is deliberately concealed or people just lack the time, inclination or capacity to decode the system.

The metaphor of the black box, useful though it is, is, however, misleading in one crucial respect: it assumes that the AI is inside the box and we are the ones trying to look in from the outside. But increasingly this is not the case. Increasingly, it is we who are trapped inside the box, being sent signals and nudges by the AI, and not entirely sure what is happening outside.

Consider the way credit-scoring algorithms work. Many times neither the decision-maker (the human in the loop) nor the person affected knows why they get the score they do. The systems are difficult to decode and often deliberately concealed to prevent gaming. Nevertheless, the impact of these systems on human behaviour is profound. The algorithm constructs a game in which humans have to act within the parameters set by the algorithm to get a good score. There are many websites dedicated to helping people reverse engineer these systems, often giving dubious advice about behaviours and rituals you must follow to improve your scores. If you follow this advice, it is not too much of a stretch to say that you end up like one of Skinner’s pigeons - flapping your wings to maintain some illusion of control.

Some of you might say that this is an overstatement. The opaque nature of AI is a well-known problem and there are now a variety of technical proposals out there for making it less opaque and more “explainable” [some of which have been discussed here today]. These technical proposals have been accompanied by increased legal safeguards that mandate greater transparency. But we have to ask ourselves a question: will these solutions really work? Will they help ordinary people to see outside the box and retain some meaningful control and understanding of what is happening to them?

A recent experiment by Ben Green and Yiling Chen from Harvard tried to answer these questions. It looked at how human decision-makers interact with risk assessment algorithms in criminal justice and finance (specifically in making decisions about the pretrial release of defendants and the approval of loan applications).
Green and Chen created their own risk assessment systems, based on some of the leading commercially available models. They then got a group of experimental subjects (recruited via Amazon’s Mechanical Turk) to use these algorithms to make decisions under a number of different conditions. I won’t go through all the conditions here, but I will describe the four most important. In the first condition, the experimental subjects were just given the raw score provided by the algorithm and asked to make a decision on foot of this; in the second they were asked to give their own prediction initially and then update it after being given the algorithm’s prediction; in the third they were given the algorithm’s score, along with an explanation of how that score was derived, and asked to make a choice; and in the fourth they were given the opportunity to learn how accurate the algorithm was based on real-world results (did someone default on their loan or not; did they show up to their trial or not). The question was: how would the humans react to these different scenarios? Would giving them more information improve the accuracy, reliability and fairness of their decision-making? The findings were dispiriting. Green and Chen found that using algorithms did improve the overall accuracy of decision-making across all conditions, but this was not because adding information and explanations enabled the humans to play a more meaningful role in the process. On the contrary, adding more information often made the human interaction with the algorithm worse. When given the opportunity to learn from the real-world outcomes, the humans became overconfident in their own judgments, more biased, and less accurate overall. When given explanations, they could maintain accuracy but only to the extent that they deferred more to the algorithm. In short, the more transparent the system seemed to the workers, the more they either made its decisions worse or had to defer to it, limiting their own agency. It is important not to extrapolate too much from one study, but the findings here are consistent with what has been found in other cases of automation in the workplace: humans are often the weak link in the chain. They need to be kept in check. This suggests that if we want to reap the benefits of AI and automation, we may have to create an environment that is much like that of the Skinner box, one in which humans can flap their wings, convinced they are making a difference, but prevented from doing any real damage. This is the enchanted world of techno-superstition: a world in which we adopt odd rituals and habits (explainable AI, fair AI, etc.) to create an illusion of control. Now, the original title of my talk promised five reasons for pessimism about AI in the workplace. But what we have here is one big reason that breaks down into five sub-reasons. Let me explain what I mean. The problem of techno-superstition stems from two related problems: (i) a lack of understanding/knowledge of how the world (in this case the AI system) works and (ii) the illusion of control over that system. These two problems combine into a third problem: the erosion of the possibility of achievement. One reason why we work is so that we can achieve certain outcomes. But when we lack understanding and control, it undermines our sense of achievement. We achieve things when we use our reason to overcome obstacles to problem-solving in the real world.
Some people might argue that a human collaborating with an AI system to produce some change in the world is achieving something through the combination of their efforts. But this is only true if the human plays some significant role in the collaboration. If humans cannot meaningfully make a difference to the success of AI or accurately calibrate their behaviour to produce better outcomes in tandem with the AI, then the pathway to achievement is blocked. This seems to be what happens, even when we try to make the systems more transparent. Related to this is the fourth problem: that in order to make AI systems work effectively with humans, the designers and manufacturers have to control human attention and behaviour in a way that undermines human autonomy. Humans cannot be given free rein inside the box. They have to be guided, nudged, manipulated and possibly even coerced to do the right thing. Explanations have to be packaged in a way that prevents the humans from undermining the accuracy, reliability and fairness of the overall system. This, of course, is not unusual. Workplaces are always designed with a view to controlling and incentivising behaviour, but AI enables a rapidly updating and highly dynamic form of behavioural control. The traditional human forms of resistance to outside control cannot easily cope with this new reality. This all then culminates in the fifth and final problem: the pervasive use of AI in the workplace (and in society more generally) undermines human agency. Instead of being the active captains of our fates, we become the passive recipients of technological benefits. This is a tragedy because we have built so much of our civilisation and sense of self-worth on the celebration of agency. We are supposed to be agents of change, responsible to ourselves and to one another for what happens in the world around us. This is why we value the work we do and why we crave the illusion of control. What happens if agency can no longer be sustained? As per usual, I have left the solutions to the very end — to the point in the talk where they cannot be fully fleshed out and where I cannot be reasonably criticised for failing to do so — but it seems to me that we face two fundamental choices when it comes to addressing techno-superstition: (i) we can tinker with what’s presented to us inside the box, i.e. we can add more bells and whistles to our algorithms, more levers and switches. These will either give humans genuine understanding and control over the systems or the illusion of understanding and control. The problem with the former is that it frequently involves tradeoffs or compromises to the system’s efficacy; the problem with the latter is that it involves greater insults to the agency of the humans working inside the box. But there is an alternative: (ii) we can stop flapping our wings and get out of the box altogether. Leave the machines to do what they are best at while we do something else. Increasingly, I have come to think we should do the latter; to do so would acknowledge the truly liberating power of AI. This is the argument I develop further in my book Automation and Utopia. Thank you for your attention.

 Assessing the Moral Status of Robots: A Shorter Defence of Ethical Behaviourism | File Type: audio/mpeg | Duration: Unknown

[This is the text of a lecture that I delivered at Tilburg University on the 24th of September 2019. It was delivered as part of the 25th Anniversary celebrations for TILT (Tilburg Institute for Law, Technology and Society). My friend and colleague Sven Nyholm was the discussant for the evening. The lecture is based on my longer academic article ‘Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism’ but was written from scratch and presents some key arguments in a snappier and clearer form. I also include a follow up section responding to criticisms from the audience on the evening of the lecture. My thanks to all those involved in organizing the event (Aviva de Groot, Merel Noorman and Silvia de Conca in particular). You can download an audio version of this lecture, minus the reflections and follow ups, here or listen to it above] 1. Introduction My lecture this evening will be about the conditions under which we should welcome robots into our moral communities. Whenever I talk about this, I am struck by how much my academic career has come to depend upon my misspent youth for its inspiration. Like many others, I was obsessed with science fiction as a child, and in particular with the representation of robots in science fiction. I had two favourite, fictional, robots. The first was R2D2 from the original Star Wars trilogy. The second was Commander Data from Star Trek: the Next Generation. I liked R2D2 because of his* personality - courageous, playful, disdainful of authority - and I liked Data because the writers of Star Trek used him as a vehicle for exploring some important philosophical questions about emotion, humour, and what it means to be human. In fact, I have to confess that Data has had an outsized influence on my philosophical imagination and has featured in several of my academic papers. Part of the reason for this was practical. When I grew up in Ireland we didn’t have many options to choose from when it came to TV. We had to make do with what was available and, as luck would have it, Star Trek: TNG was on every day when I came home from school. As a result, I must have watched each episode of its 7-season run multiple times. One episode in particular has always stayed with me. It was called ‘Measure of a Man’. In it, a scientist from the Federation visits the Enterprise because he wants to take Data back to his lab to study him. Data, you see, is a sophisticated human-like android, created by a lone scientific genius, under somewhat dubious conditions. The Federation scientist wants to take Data apart and see how he works with a view to building others like him. Data, unsurprisingly, objects. He argues that he is not just a machine or piece of property that can be traded and disassembled to suit the whims of human beings. He has his own, independent moral standing. He deserves to be treated with dignity. But how does Data prove his case? A trial ensues and evidence is given on both sides. The prosecution argue that Data is clearly just a piece of property. He was created not born. He doesn’t think or see the world like a normal human being (or, indeed, other alien species). He even has an ‘off switch’. Data counters by giving evidence of the rich relationships he has formed with his fellow crew members and eliciting testimony from others regarding his behaviour and the interactions they have with him. Ultimately, he wins the case. The court accepts that he has moral standing. 
Now, we can certainly lament the impact that science fiction has on the philosophical debate about robots. As David Gunkel observes in his 2018 book Robot Rights: “[S]cience fiction already — and well in advance of actual engineering practice — has established expectations for what a robot is or can be. Even before engineers have sought to develop working prototypes, writers, artists, and filmmakers have imagined what robots do or can do, what configurations they might take, and what problems they could produce for human individuals and communities.”  (Gunkel 2018, 16) He continues, noting that this is a “potential liability” because: “science fiction, it is argued, often produces unrealistic expectations for and irrational fears about robots that are not grounded in or informed by actual science.” (Gunkel 2018, 18) I certainly heed this warning. But, nevertheless, I think the approach taken by the TNG writers in the episode ‘Measure of a Man’ is fundamentally correct. Even if we cannot currently create a being like Data, and even if the speculation is well in advance of the science, they still give us the correct guide to resolving the philosophical question of when to welcome robots into our moral community. Or so, at least, I shall argue in the remainder of this lecture. 2. Tribalism and Conflict in Robot Ethics Before I get into my own argument, let me say something about the current lay of the land when it comes to this issue. Some of you might be familiar with the famous study by the social psychologist Muzafer Sherif. It was done in the early 1950s at a summer camp in Robber’s Cave, Oklahoma. Suffice to say, it is one of those studies that wouldn’t get ethics approval nowadays. Sherif and his colleagues were interested in tribalism and conflict. They wanted to see how easy it would be to get two groups of 11-year old boys to divide into separate tribes and go to war with one another. It turned out to be surprisingly easy. By arbitrarily separating the boys into two groups, giving them nominal group identity (the ‘Rattlers’ and the ‘Eagles’), and putting them into competition with each other, Sherif and his research assistants sowed the seeds for bitter and repeated conflict. The study has become a classic, repeatedly cited as evidence of how easy it is for humans to get trapped in intransigent group conflicts. I mention it here because, unfortunately, it seems to capture what has happened with the debate about the potential moral standing of robots. The disputants have settled into two tribes. There are those that are ‘anti’ the idea; and there are those that are ‘pro’ the idea. The members of these tribes sometimes get into heated arguments with one another, particularly on Twitter (which, admittedly, is a bit like a digital equivalent of Sherif’s summer camp). Those that are ‘anti’ the idea would include Noel Sharkey, Amanda Sharkey, Deborah Johnson, Aimee van Wynsberghe and the most recent lecturer in this series, Joanna Bryson. They cite a variety of reasons for their opposition. The Sharkeys, I suspect, think the whole debate is slightly ridiculous because current robots clearly lack the capacity for moral standing, and debating their moral standing distracts from the important issues in robot ethics - namely stopping the creation and use of robots that are harmful to human well-being. Deborah Johnson would argue that since robots can never experience pain or suffering they will never have moral standing. 
Van Wynsberghe and Bryson are maybe a little different and lean more heavily on the idea that even if it were possible to create robots with moral standing — a possibility that Bryson at least is willing to concede — it would be a very bad idea to do so because it would cause considerable moral and legal disruption. Those that are pro the idea would include Kate Darling, Mark Coeckelbergh, David Gunkel, Erica Neely, and Daniel Estrada. Again, they cite a variety of reasons for their views. Darling is probably the weakest on the pro side. She focuses on humans and thinks that even if robots themselves lack moral standing we should treat them as if they had moral standing because that would be better for us. Coeckelbergh and Gunkel are more provocative, arguing that in settling questions of moral standing we should focus less on the intrinsic capacities of robots and more on how we relate to them. If those relations are thick and meaningful, then perhaps we should accept that robots have moral standing. Erica Neely proceeds from a principle of moral precaution, arguing that even if we are unsure of the moral standing of robots we should err on the side of over-inclusivity rather than under-inclusivity when it comes to this issue: it is much worse to exclude a being with moral standing than to include one without. Estrada is almost the polar opposite of Bryson, welcoming the moral and legal disruption that embracing robots would entail because it would loosen the stranglehold of humanism on our ethical code. To be clear, this is just a small sample of those who have expressed an opinion about this topic. There are many others that I just don’t have time to discuss. I should, however, say something here about this evening’s discussant, Sven, and his views on the matter. I had the fortune of reading a manuscript of Sven’s forthcoming book Humans, Robots and Ethics. It is an excellent and entertaining contribution to the field of robot ethics and in it Sven shares his own views on the moral standing of robots. I’m sure he will explain them later on but, for the time being, I would tentatively place him somewhere near Kate Darling on this map: he thinks we should be open to the idea of treating robots as if they had moral standing, but not because of what the robots themselves are but because of what respecting them says about our attitudes to other humans. And what of myself? Where do I fit in all of this? People would probably classify me as belonging to the pro side. I have argued that we should be open to the idea that robots have moral standing. But I would much prefer to transcend this tribalistic approach to the issue. I am not an advocate for the moral standing of robots. I think many of the concerns raised by those on the anti side are valid. Debating the moral standing of robots can seem, at times, ridiculous and a distraction from other important questions in robot ethics; and accepting them into our moral communities will, undoubtedly, lead to some legal and moral disruption (though I would add that not all disruption is a bad thing). That said, I do care about the principles we should use to decide questions of moral standing, and I think that those on the anti side of the debate sometimes use bad arguments to support their views. This is why, in the remainder of this lecture, I will defend a particular approach to settling the question of the moral standing of robots. I do so in the hope that this can pave the way to a more fruitful and less tribalistic debate.
In this sense, I am trying to return to what may be the true lesson of Sherif’s famous experiment on tribalism. In her fascinating book The Lost Boys: Inside Muzafer Sherif’s Robbers Cave Experiment, Gina Perry has revealed the hidden history behind Sherif’s work. It turns out that Sherif tried to conduct the exact same experiment as he did in Robber’s Cave one year before in Middle Grove, New York. It didn’t work out. No matter what the experimenters did to encourage conflict, the boys refused to get sucked into it. Why was this? One suggestion is that at Middle Grove, Sherif didn’t sort the boys into two arbitrary groups as soon as they arrived. They were given the chance to mingle and get to know one another before being segregated. This initial intermingling may have inoculated them from tribalism. Perhaps we can do the same thing with philosophical dialogue? I live in hope. 3. In Defence of Ethical Behaviourism The position I wish to defend is something I call ‘ethical behaviourism’. According to this view, the behavioural representations of another entity toward you are a sufficient ground for determining their moral status. Or, to put it slightly differently, how an entity looks and acts is enough to determine its moral status. If it looks and acts like a duck, then you should probably treat it like you treat any other duck. Ethical behaviourism works through comparisons. If you are unsure of the moral status of a particular entity — for present purposes this will be a robot but it should be noted that ethical behaviourism has broader implications — then you should compare its behaviours to that of another entity that is already agreed to have moral status — a human or an animal. If the robot is roughly performatively equivalent to that other entity, then it too has moral status. I say “roughly” since no two entities are ever perfectly equivalent. If you compared two adult human beings you would spot performative differences between them, but this wouldn’t mean that one of them lacks moral standing as a result. The equivalence test is an inexact one, not an exact one. There is nothing novel in ethical behaviourism. It is, in effect, just a moral variation of the famous Turing Test for machine intelligence. Where Turing argued that we should assess intelligence on the basis of behaviour, I am arguing that we should determine moral standing on the basis of behaviour. It is also not a view that is original to me. Others have defended similar views, even if they haven’t explicitly labelled it as such. Despite the lack of novelty, ethical behaviourism is easily misunderstood and frequently derided. So let me just clarify a couple of points. First, note that it is a practical and epistemic thesis about how we can settle questions of moral standing; it is not an abstract metaphysical thesis about what it is that grounds moral standing. So, for example, someone could argue that the capacity to feel pain is the metaphysical grounding for moral status and that this capacity depends on having a certain mental apparatus. The ethical behaviourist can agree with this. They will just argue that the best evidence we have for determining whether an entity has the capacity to feel pain is behavioural. Furthermore, ethical behaviourism is agnostic about the broader consequences of its comparative tests. To say that one entity should have the same moral standing as another entity does not mean both are entitled to a full set of legal and moral rights. That depends on other considerations. 
A goat could have moral standing, but that doesn’t mean it has the right to own property. This is important: when I argue that we should apply this approach to robots, I am not thereby endorsing the broader claim that we should grant robots legal rights or treat them like adult human beings. This depends on who or what the robot is being compared to. So what’s the argument for ethical behaviourism? I have offered different formulations of this but for this evening’s lecture I suggest that it consists of three key propositions or premises. (P1) The most popular criteria for moral status are dependent on mental states or capacities, e.g. theories focused on sentience, consciousness, having interests, agency, and personhood. (P2) The best evidence — and oftentimes the only practicable evidence — for the satisfaction of these criteria is behavioural. (P3) Alternative alleged grounds of moral status or criteria for determining moral status either fail to trump or dislodge the sufficiency of the behavioural evidence. Therefore, ethical behaviourism is correct: behaviour provides a sufficient basis for settling questions of moral status. I take it that the first premise of this argument is uncontroversial. Even if you think there are other grounds for moral status, I suspect you agree that an entity with sentience or consciousness (etc) has some kind of moral standing. The second premise is more controversial but is, I think, undeniable. It’s a trite observation but I will make it anyway: We don’t have direct access to one another’s minds. I cannot crawl inside your head and see if you really are experiencing pain or suffering. The only thing I have to go on is how you behave and react to the world. This is true, by the way, even if I can scan your brain and see whether the pain-perceiving part of it lights up. This is because the only basis we have for verifying the correlations between functional activity in the brain and mental states is behavioural. What I mean is that scientists ultimately verify those correlations by asking people in the brain scanners what they are feeling. So all premise (2) is saying is that if the most popular theories of moral status are to work in practice, it can only be because we use behavioural evidence to guide their application. That brings us to premise (3): that all other criteria fail to dislodge the importance of behavioural evidence. This is the most controversial one. Many people seem to passionately believe that there are other ways of determining moral status and indeed they argue that relying on behavioural evidence would be absurd. Consider these two recent Twitter comments on an article I wrote about ethical behaviourism and how it relates to animals and robots: First comment: “[This is] Errant #behaviorist #materialist nonsense…Robots are inanimate even if they imitate animal behavior. They don’t want or care about anything. But knock yourself out. Put your toaster in jail if it burns your toast.” Second comment: “If I give a hammer a friendly face so some people feel emotionally attached to it, it still remains a tool #AnthropomorphicFallacy” These are strong statements, but they are not unusual. I encounter this kind of criticism quite frequently. But why? Why are people so resistant to ethical behaviourism? Why do they think that there must be something more to how we determine moral status? Let’s consider some of the most popular objections. 4.
Objections and Replies In a recent paper, I suggested that there were seven (more, depending on how you count) major objections to ethical behaviourism. I won’t review all seven here, but I will consider four of the most popular ones. Each of these objections should be understood as an attempt to argue that behavioural evidence by itself cannot suffice for determining moral standing. Other evidence matters as well and can ‘defeat’ the behavioural evidence. (A) The Material Cause Objection The first objection is that the ontology of an entity makes a difference to its moral standing. To adopt the Aristotelian language, we can say that the material cause of an entity (i.e. what it is made up of) matters more than behaviour when it comes to moral standing. So, for example, someone could argue that robots lack moral standing because they are not biological creatures. They are not made from the same ‘wet’ organic components as human beings or animals. Even if they are performatively equivalent to human beings or animals, this ontological difference scuppers any claim they might have to moral standing. I find this objection unpersuasive. It smacks to me of biological mysterianism. Why exactly does being made of particular organic material make such a crucial difference? Imagine if your spouse, the person you live with everyday, was suddenly revealed to be an alien from the Andromeda galaxy. Scientists conduct careful tests and determine that they are not a carbon-based lifeform. They are made from something different, perhaps silicon. Despite this, they still look and act in the same way as they always have (albeit now with some explaining to do). Would the fact that they are made of different stuff mean that they no longer warrant any moral standing in your eyes? Surely not. Surely the behavioural evidence suggesting that they still care about you and still have the mental capacities you used to associate with moral standing would trump the new evidence you have regarding their ontology. I know non-philosophers dislike thought experiments of this sort, finding them to be slightly ridiculous and far-fetched. Nevertheless, I do think they are vital in this context because they suggest that behaviour does all the heavy lifting when it comes to assessing moral standing. In other words, behaviour matters more than matter. This is also, incidentally, one reason why it is wrong to say that ethical behaviourism is a ‘materialist’ view: ethical behaviourism is actually agnostic regarding the ontological instantiation of the capacities that ground moral status; it is concerned only with the evidence that is sufficient for determining their presence. All that said, I am willing to make one major concession to the material cause objection. I will concede that ontology might provide an alternative, independent ground for determining the moral status of an entity. Thus, we might accept that an entity that is made from the right biological stuff has moral standing, even if they lack the behavioural sophistication we usually require for moral standing. So, for example someone in a permanent coma might have moral standing because of what they are made of, and not because of what they can do. Still, all this shows is that being made of the right stuff is an independent sufficient ground for moral standing, not that it is a necessary ground for moral standing. The latter is what would need to be proved to undermine ethical behaviourism. 
(B) The Efficient Cause Objection The second objection is that how an entity comes into existence makes a difference to its moral standing. To continue the Aristotelian theme, we can say that the efficient cause of existence is more important than the unfolding reality. This is an objection that the philosopher Michael Hauskeller hints at in his work. Hauskeller doesn’t focus on moral standing per se, but does focus on when we can be confident that another entity cares for us or loves us. He concedes that behaviour seems like the most important thing when addressing this issue — what else could caring be apart from caring behaviour? — but then resiles from this by arguing that how the being came into existence can undercut the behavioural evidence. So, for example, a robot might act as if it cares about you, but when you learn that the robot was created and manufactured by a team of humans to act as if it cares for you, then you have reason to doubt the sincerity of its behaviour. It could be that what Hauskeller is getting at here is that behavioural evidence can often be deceptive and misleading. If so, I will deal with this concern in a moment. But it could also be that he thinks that the mere fact that a robot was programmed and manufactured, as opposed to being evolved and developed, makes a crucial difference to moral standing. If that is what he is claiming, then it is hard to see why we should take it seriously. Again, imagine if your spouse told you that they were not conceived and raised in the normal way. They were genetically engineered in a lab and then carefully trained and educated. Having learned this, would you take a new view of their moral standing? Surely not. Surely, once again, how they actually behave towards you — and not how they came into existence — would be what ultimately mattered. We didn’t deny the first in vitro baby moral standing simply because she came into existence in a different way from ordinary human beings. The same principle should apply to robots. Furthermore, if this is what Hauskeller is arguing, it would provide us with an unstable basis on which to make crucial judgments of moral standing. After all, the differences between humans and robots with respect to their efficient causes are starting to break down. Increasingly, robots are not being programmed and manufactured from the top down to follow specific rules. They are instead given learning algorithms and then trained on different datasets, with the process sometimes being explicitly modelled on evolution and childhood development. Similarly, humans are increasingly being designed and programmed from the top down, through artificial reproduction, embryo selection and, soon, genetic engineering. You may object to all this tinkering with the natural processes of human development and conception. But I think you would be hard pressed to deny a human that came into existence as a result of these processes the moral standing you ordinarily give to other human beings. (C) The Final Cause Objection The third objection is that the purposes an entity serves and how it is expected to fulfil those purposes make a difference to its moral standing. This is an objection that Joanna Bryson favours in her work. In several papers, she has argued that because robots will be designed to fulfil certain purposes on our behalf (i.e. they will be designed to serve us) and because they will be owned and controlled by us in the process, they should not have moral standing.
Now, to be fair, Bryson is more open to the possibility of robot moral standing than most. She has said, on several occasions, that it is possible to create robots that have moral standing. She just thinks that this should not happen, in part because they will be owned and controlled by us, and because they will be (and perhaps should be) designed to serve our ends. I don’t think there is anything in this that dislodges or upsets ethical behaviourism. For one thing, I find it hard to believe that the fact that an entity has been designed to fulfil a certain purpose should make a crucial difference to its moral standing. Suppose, in the future, human parents can genetically engineer their offspring to fulfil certain specific ends. For example, they can select genes that will guarantee (with the right training regime) that their child will be a successful athlete (this is actually not that dissimilar to what some parents try to do nowadays). Suppose they succeed. Would this fact alone undermine the child’s claim to moral standing? Surely not, and surely the same standard should apply to a robot. If it is performatively equivalent to another entity with moral standing, then the mere fact that it has been designed to fulfil a specific purpose should not affect its moral standing. Related to this, it is hard to see why the fact that we might own and control robots should make a critical difference to their moral standing. If anything, this inverts the proper order of moral justification. The fact that a robot looks and acts like another entity that we believe to have moral standing should cause us to question our approach to ownership and control, not vice versa. We once thought it was okay for humans to own and control other humans. We were wrong to think this because it ignored the moral standing of those other humans. That said, there are nuances here. Many people think that animals have some moral standing (i.e. that we need to respect their welfare and well-being) but that it is not wrong to own them or attempt to control them. The same approach might apply to robots if they are being compared to animals. This is the crucial point about ethical behaviourism: the ethical consequences of accepting that a robot is performatively equivalent to another entity with moral standing depend, crucially, on who or what that other entity is. (D) The Deception Objection The fourth objection is that ethical behaviourism cannot work because it is too easy to be deceived by behavioural cues. A robot might look and act like it is in pain, but this could just be a clever trick, used by its manufacturer, to foster false sympathy. This is, probably, the most important criticism of ethical behaviourism. It is what I think lurks behind the claim that ethical behaviourism is absurd and must be resisted. It is well-known that humans have a tendency toward hasty anthropomorphism. That is, we tend to ascribe human-like qualities to features of our environment without proper justification. We anthropomorphise the weather, our computers, the trees and the plants, and so forth. It is easy to ‘hack’ this tendency toward hasty anthropomorphism. As social roboticists know, putting a pair of eyes on a robot can completely change how a human interacts with it, even if the robot cannot see anything. People worry, consequently, that ethical behaviourism is easily exploited by nefarious technology companies. I sympathise with the fear that motivates this objection.
It is definitely true that behaviour can be misleading or deceptive. We are often misled by the behaviour of our fellow humans. To quote Shakespeare, someone can ‘smile and smile and be a villain’. But what is the significance of this fact when it comes to assessing moral status? To me, the significance is that it means we should be very careful when assessing the behavioural evidence that is used to support a claim about moral status. We shouldn’t extrapolate too quickly from one behaviour. If a robot looks and acts like it is in pain (say), that might provide some warrant for thinking it has moral status, but we should examine its behavioural repertoire in more detail. It might emerge that other behaviours are inconsistent with the hypothesis that it feels pain or suffering. The point here, however, is that we are always using other behavioural evidence to determine whether the initial behavioural evidence was deceptive or misleading. We are not relying on some other kind of information. Thus, for example, I think it would be a mistake to conclude that a robot cannot feel pain, even though it performs as if it does, because the manufacturer of the robot tells us that it was programmed to do this, or because some computer engineer can point to some lines of code that are responsible for the pain performance. That evidence by itself — in the absence of other countervailing behavioural evidence — cannot undermine the behavioural evidence suggesting that the robot does feel pain. Think about it like this: imagine if a biologist came to you and told you that evolution had programmed the pain response into humans in order to elicit sympathy from fellow humans. What’s more, imagine if a neuroscientist came to you and told you she could point to the exact circuit in the brain that is responsible for the human pain performance (and maybe even intervene in and disrupt it). What they say may well be true, but it wouldn’t mean that the behavioural evidence suggesting that your fellow humans are in pain can be ignored. This last point is really the crucial bit. This is what is most distinctive about the perspective of ethical behaviourism. The tendency to misunderstand it, ignore it, or skirt around it, is why I think many people on the ‘anti’ side of the debate make bad arguments. 5. Implications and Conclusions That’s all I will say in defence of ethical behaviourism this evening. Let me conclude by addressing some of its implications and heading off some potential misunderstandings. First, let me re-emphasise that ethical behaviourism is about the principles we should apply when assessing the moral standing of robots. In defending it, I am not claiming that robots currently have moral standing or, indeed, that they will ever have moral standing. I think this is possible, indeed probable, but I could be wrong. The devil is going to be in the detail of the behavioural tests we apply (just as it is with the Turing test for intelligence). Second, there is nothing in ethical behaviourism that suggests that we ought to create robots that cross the performative threshold to moral standing. It could be, as people like Bryson and Van Wynsberghe argue, that this is a very bad idea: that it will be too disruptive of existing moral and legal norms. What ethical behaviourism does suggest, however, is that there is an ethical weight to the decision to create human-like and animal-like robots that may be underappreciated by robot manufacturers.
Third, acknowledging the potential risks, there are also potential benefits to creating robots that cross the performative threshold. Ethical behaviourism can help to reveal a value to relationships with robots that is otherwise hidden. If I am right, then robots can be genuine objects of moral affection, friendship and love, under the right conditions. In other words, just as there are ethical risks to creating human-like and animal-like robots, there are also ethical rewards and these tend to be ignored, ridiculed or sidelined in the current debate. Fourth, and related to this previous point, the performative threshold that robots have to cross in order to unlock the different kinds of value might vary quite a bit. The performative threshold needed to attain basic moral standing might be quite low; the performative threshold needed to say that a robot can be a friend or a partner might be substantially higher. A robot might have to do relatively little to convince us that it should be treated with moral consideration, but it might have to do a lot to convince us that it is our friend. These are topics that I have explored in greater detail in some of my papers, but they are also topics that Sven has explored at considerable length. Indeed, several chapters of his forthcoming book are dedicated to them. So, on that note, it is probably time for me to shut up and hand over to him and see what he has to say about all of this. Reflections and Follow Ups After I delivered the above lecture, my colleague and friend Sven Nyholm gave a response and there were some questions and challenges from the audience. I cannot remember every question that was raised, but I thought I would respond to a few that I can remember. 1. The Randomisation Counterexample One audience member (it was Nathan Wildman) presented an interesting counterexample to my claim that other kinds of evidence don’t defeat or undermine the behavioural evidence for moral status. He argued that we could cook-up a possible scenario in which our knowledge of the origins of certain behaviours did cause us to question whether it was sufficient for moral status. He gave the example of a chatbot that was programmed using a randomisation technique. The chatbot would generate text at random (perhaps based on some source dataset). Most of the time the text is gobbledygook but on maybe one occasion it just happens to have a perfectly intelligible conversation with you. In other words, whatever is churned out by the randomisation algorithm happens to perfectly coincide with what would be intelligible in that context (like picking up a meaningful book in Borges’s Library of Babel). This might initially cause you to think it has some significant moral status, but if the computer programmer came along and told you about the randomisation process underlying the programming you would surely change your opinion. So, on this occasion, it looks like information about the causal origins of the behaviour, makes a difference to moral status. Response: This is a clever counterexample but I think it overlooks two critical points. First, it overlooks the point I make about avoiding hasty anthropomorphisation towards the end of my lecture. I think we shouldn’t extrapolate too much from just one interaction with a robot. We should conduct a more thorough investigation of the robot’s (or in this case the chatbot’s) behaviours. If the intelligible conversation was just a one-off, then we will quickly be disabused of our belief that it has moral status. 
But if it turns out that the intelligible conversation was not a one-off, then I don’t think the evidence regarding the randomisation process would have any such effect. The computer programmer could shout and scream as much as he/she likes about the randomisation algorithm, but I don’t think this would suffice to undermine the consistent behavioural evidence. This links to a second, and perhaps deeper metaphysical point I would like to make: we don’t really know what the true material instantiation of the mind is (if it is indeed material). We think the brain and its functional activity is pretty important, but we will probably never have a fully satisfactory theory of the relationship between matter and mind. This is the core of the hard problem of consciousness. Given this, it doesn’t seem wise or appropriate to discount the moral status of this hypothetical robot just because it is built on a randomisation algorithm. Indeed, if such a robot existed, it might give us reason to think that randomisation was one of the ways in which a mind could be functionally instantiated in the real world. I should say that this response ignores the role of moral precaution in assessing moral standing. If you add a principle of moral precaution to the mix, then it may be wrong to favour a more thorough behavioural test. This is something I discuss a bit in my article on ethical behaviourism. 2. The Argument confuses how we know X is valuable with what makes X actually valuable One point that Sven stressed in his response, and which he makes elsewhere too, is that my argument elides or confuses two separate things: (i) how we know whether something is of value and (ii) what it is that makes it valuable. Another way of putting it: I provide a decision-procedure for deciding who or what has moral status but I don’t thereby specify what it is that makes them have moral status. It could be that the capacity to feel pain is what makes someone have moral standing and that we know someone feels pain through their behaviour, but this doesn’t mean that they have moral standing because of their behaviour. Response: This is probably a fair point. I may on occasion elide these two things. But my feeling is that this is a ‘feature’ rather than a ‘bug’ in my account. I’m concerned with how we practically assess and apply principles of moral standing in the real world, and not so much with what it is that metaphysically undergirds moral standing. 3. Proxies for Behaviour versus Proxies for Mind Another comment (and I apologise for not remembering who gave it) is that on my theory behaviour is important but only because it is a proxy for something else, namely some set of mental states or capacities. This is similar to the point Sven is making in his criticism. If that’s right, then I am wrong to assume that behaviour is the only (or indeed the most important) proxy for mental states. Other kinds of evidence serve as proxies for mental states. The example was given of legal trials where the prosecution is trying to prove what the mental status of the defendant was at the time of an offence. They don’t just rely on behavioural evidence. They also rely on other kinds of forensic evidence to establish this. Response: I don’t think this is true and this gets to a deep feature of my theory. To take the criminal trial example, I don’t think it is true to say that we use other kinds of evidence as proxies for mental states. I think we use them as proxies for behaviour which we then use as proxies for mental states. 
In other words, the actual order of inference goes: Other evidence → behaviour → mental state, and not: Other evidence → mental state. This is the point I was getting at in my talk when I spoke about how we make inferences from functional brain activity to mental state. I believe that when we draw a link between brain activity and a mental state, what we are really doing is this: Brain state → behaviour → mental state, and not: Brain state → mental state. Now, it is, of course, true to say that sometimes scientists think we can make this second kind of inference. For example, purveyors of brain-based lie detection tests (and, indeed, other kinds of lie detection test) try to draw a direct line of inference from a brain state to a mental state, but I would argue that this is only because they have previously verified their testing protocol by following the “brain state → behaviour → mental state” route and confirming that it is reliable across multiple tests. This gives them the confidence to drop the middle step on some occasions, but ultimately this is all warranted (if it is, in fact, warranted – brain-based lie detection is controversial) because the scientists first took the behavioural step. To undermine my view, you would have to show that it is possible to cut out the behavioural step in this inference pattern. I don’t think this can be done, but perhaps I can be proved wrong. This is perhaps the most metaphysical aspect of my view. 4. Default Settings and Practicalities Another point that came up in conversation with Sven, Merel Noorman and Silvia de Conca had to do with the default assumptions we are likely to have when dealing with robots and how this impacts on the practicalities of robots being accepted into the moral circle. In other words, even if I am right in some abstract, philosophical sense, will anyone actually follow the behavioural test I advocate? Won’t there be a lot of resistance to it in reality? Now, as I mentioned in my lecture, I am not an activist for robot rights or anything of the sort. I am interested in the general principles we should apply when settling questions of moral status, not in whether a particular being, such as a robot, has acquired moral status. That said, implicit views about the practicalities of applying the ethical behaviourist test may play an important role in some of the arguments I am making. One example of this has to do with the ‘default’ assumption we have when interpreting the behaviour of humans/animals vis-à-vis robots. We tend to approach humans and animals with an attitude of good faith, i.e. we assume that each of their outward behaviours is a sincere representation of their inner state of mind. It’s only if we receive contrary evidence that we will start to doubt the sincerity of the behaviour. But what default assumption do we have when confronting robots? It seems plausible to suggest that most people will approach them with an attitude of bad faith. They will assume that their behaviours are representative of nothing at all and will need a lot of evidence to convince them that they should be granted some weight. This suggests that (a) not all behavioural evidence is counted equally and (b) it might be very difficult, in practice, for robots to be accepted into the moral circle.
Response: I don’t see this as a criticism of ethical behaviourism but, rather, a warning to anyone who wishes to promote it. In other words, I accept that people will resist ethical behaviourism and may treat robots with greater suspicion than human or animal agents. One of the key points of this lecture and the longer academic article I wrote about the topic was to address this suspicion and skepticism. Nevertheless, the fact that there may be these practical difficulties does not mean that ethical behaviourism is incorrect. In this respect, it is worth noting that Turing was acutely aware of this problem when he originally formulated his 'Imitation Game' test. The reason why the test was purely text-based in its original form was to prevent human-centric biases affecting its operation. 5. Ethical Mechanicism vs Ethical Behaviourism After I posted this article, Natesh Ganesh posted a critique of my handling of the deception objection on Twitter. He made two interesting points. First, he argued that the thought experiment I used to dismiss the deception objection was misleading and circular. If a scientist revealed the mechanisms underlying my own pain performances I would have no reason to doubt that the pain was genuine since I already know that someone with my kind of neural circuitry can experience pain. If they revealed the mechanisms underlying a robot’s pain performances things would be different because I do not yet have a reason to think that a being with that kind of mechanism can experience genuine pain. As a result, the thought experiment is circular because only somebody who already accepted ethical behaviourism would be so dismissive of the mechanistic evidence. Here’s how Natesh expresses the point: “the analogy in the last part [the response to the deception objection] seems flawed. Showing me the mechanisms of pain in entities (like humans) who we share similar mechanisms with & agree have moral standing is different from showing me the mechanisms of entities (like robots) whose moral standing we are trying to determine. Denying experience of pain in the 1st simply because I now know the circuitry would imply denying your own pain & hence moral standing. But accepting/ denying the 2nd if its a piece of code implicitly depends on whether you already accept/deny ethical behaviorism. It is just circular to appeal to that example as evidence.” He then follows up with a second point (implicit in what was just said) about the importance of mechanical similarities between entities when it comes to assessing moral standing: “I for one am more likely to [believe] a robot can experience pain if it shows the behavior & the manufacturer opened it up & showed me the circuitry and if that was similar to my own (different material perhaps) I am more likely to accept the robot experiences pain. In this case once again I needed machinery on top of behavior.” What I would say here is that Natesh, although not completely dismissive of the importance of behaviour to assessing moral standing, is a fan of ethical mechanicism, and not ethical behaviourism. He thinks you must have mechanical similarity (equivalence?) before you can conclude that two entities share moral standing. Response: On the charge of circularity, I don’t think this is quite fair. The thought experiment I propose when responding to the deception objection is, like all thought experiments, intended to be an intuition pump.
The goal is to imagine a situation in which you could describe and intervene in the mechanical underpinning of a pain performance with great precision (be it a human pain performance or otherwise) and ask whether the mere fact that you could describe the mechanism in detail or intervene in it would make a difference to the entity’s moral standing. My intuitions suggest it wouldn’t make a difference, irrespective of the details of the mechanism (this is the point I make, above, in relation to the example given by Nathan Wildman about the robot whose behaviour is the result of a random-number generator programme). Perhaps other people’s intuitions are pumped in a different direction. That can happen but it doesn’t mean the thought experiment is circular. What about the importance of mechanisms in addition to behaviour? This is something I address in more detail in the academic paper. I have two thoughts about it. First, I could just bite the bullet and agree that the underlying mechanisms must be similar too. This would just add an additional similarity test to the assessment of moral status. There would then be similar questions as to how similar the mechanisms must be. Is it enough if they are, roughly, functionally similar or must they have the exact same sub-components and processes? If the former, then it still seems possible in principle for roboticists to create a functionally similar underlying mechanism and this could then ground moral standing for robots. Second, despite this, I would still push back against the claim that similar underlying mechanisms are necessary. This strikes me as being just a conservative prejudgment rather than a good reason for denying moral status to behaviourally equivalent entities. Why are we so confident that only entities with our neurological mechanisms (or something very similar) can experience pain (or instantiate the other mental properties relevant to moral standing)? Or, to put it less controversially, why should we be so confident that mechanical similarity undercuts behavioural similarity? If there is an entity that looks and acts like it is in pain (or has interests, a sense of personhood, agency etc), and all the behavioural tests confirm this, then why deny it moral standing because of some mechanical differences? Part of the resistance here could be that people are confusing two different claims: Claim 1: it is impossible (physically, metaphysically) for an entity that lacks sufficient mechanical similarity (with humans/animals) to have the behavioural sophistication we associate with experiencing pain, having agency etc. Claim 2: an entity that has the behavioural sophistication we associate with experiencing pain, having agency (etc) but then lacks mechanical similarity to other entities with such behavioural sophistication, should be denied moral standing because they lack mechanical similarity. Ethical behaviourism denies claim 2, but it does not, necessarily, deny claim 1. It could be the case that mechanical similarity is essential for behavioural similarity. This is something that can only be determined after conducting the requisite behavioural tests. The point, as always throughout my defence of the position, is that the behavioural evidence should be our guide. This doesn’t mean that other kinds of evidence are irrelevant but simply that they do not carry as much weight. My sense is that people who favour ethical mechanicism have a very strong intuition in favour of claim 1, which they then carry over into support for claim 2.
This carry over is not justified as the two claims are not logically equivalent.
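For readers who like to see the logical point spelled out, here is one schematic way of rendering the two claims. This is my own gloss, not part of the original exchange, writing $B(x)$ for 'x displays the relevant behavioural sophistication', $M(x)$ for 'x is mechanically similar to humans/animals', and $S(x)$ for 'x has moral standing':

Claim 1: $\Box\,\forall x\,(B(x) \rightarrow M(x))$ (nothing could have the behaviour without the mechanism)

Claim 2: $\forall x\,((B(x) \land \neg M(x)) \rightarrow \neg S(x))$ (a behaviourally sophisticated but mechanically different entity should be denied standing)

The first is a claim about what is possible; the second is a claim about what an entity would be owed if it existed. They are plainly not the same claim, and accepting the first gives you no independent reason to withhold standing from something that actually passes the behavioural tests, which is the situation ethical behaviourism cares about.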

 #64 - Munthe on the Precautionary Principle and Existential Risk | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Christian Munthe. Christian is a Professor of Practical Philosophy at the University of Gothenburg, Sweden. He conducts research and expert consultation on ethics, value and policy issues arising in the intersection of health, science & technology, the environment and society. He is probably best-known for his work on the precautionary principle and its uses in ethical and policy debates. This was the central topic of his 2011 book The Price of Precaution and the Ethics of Risk. We talk about the problems with the practical application of the precautionary principle and how they apply to the debate about existential risk. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).
Show Notes
0:00 - Introduction
1:35 - What is the precautionary principle? Where did it come from?
6:08 - The key elements of the precautionary principle
9:35 - Precaution vs. Cost Benefit Analysis
15:40 - The Problem of the Knowledge Gap in Existential Risk
21:52 - How do we fill the knowledge gap?
27:04 - Why can't we fill the knowledge gap in the existential risk debate?
30:12 - Understanding the Black Hole Challenge
35:22 - Is it a black hole or total decisional paralysis?
39:14 - Why does precautionary reasoning have a 'price'?
44:18 - Can we develop a normative theory of precautionary reasoning? Is there such a thing as a morally good precautionary reasoner?
52:20 - Are there important practical limits to precautionary reasoning?
1:01:38 - Existential risk and the conservation of value
Relevant Links
Christian's Academic Homepage
Christian's Twitter account
"The Black Hole Challenge: Precaution, Existential Risks and the Problem of Knowledge Gaps" by Christian
The Price of Precaution and the Ethics of Risk by Christian
Hans Jonas's The Imperative of Responsibility
The Precautionary Approach from the Rio Declaration
Episode 62 with Olle Häggström

 #63 - Reagle on the Ethics of Life Hacking | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the social implications of digital technology. Our conversation focuses on his most recent book: Hacking Life: Systematized Living and its Discontents (MIT Press 2019). You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
1:52 - What is life-hacking? The four features of life-hacking
4:20 - Life Hacking as Self Help for the 21st Century
7:00 - How does technology facilitate life hacking?
12:12 - How can we hack time?
20:00 - How can we hack motivation?
27:00 - How can we hack our relationships?
31:00 - The Problem with Pick-Up Artists
34:10 - Hacking Health and Meaning
39:12 - The epistemic problems of self-experimentation
49:05 - The dangers of metric fixation
54:20 - The social impact of life-hacking
57:35 - Is life hacking too individualistic? Should we focus more on systemic problems?
1:03:15 - Does life hacking encourage a less intuitive and less authentic mode of living?
1:08:40 - Conclusion (with some further thoughts on inequality)

Relevant Links
Joseph's Homepage
Joseph's Blog
Hacking Life: Systematized Living and Its Discontents (including open access HTML version)
The Lifehacker Website
The Quantified Self Website
Seth Roberts' first and final column: Butter Makes me Smarter
The Couple that Pays Each Other to Put the Kids to Bed (story about the founders of the Beeminder App)
'The Quantified Relationship' by Danaher, Nyholm and Earp
Episode 6 - The Quantified Self with Deborah Lupton

 #62 - Häggström on AI Motivations and Risk Denialism | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle's main research is in probability theory and statistical mechanics, but in recent years he has broadened his research interests to focus on applied statistics, philosophy, climate science, artificial intelligence and the social consequences of future technologies. He is the author of Here be Dragons: Science, Technology and the Future of Humanity (OUP 2016). We talk about AI motivations, specifically the Omohundro-Bostrom theory of AI motivation and its weaknesses. We also discuss AI risk denialism. You can download the episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
2:02 - Do we need to define AI?
4:15 - The Omohundro-Bostrom theory of AI motivation
7:46 - Key concepts in the Omohundro-Bostrom Theory: Final Goals vs Instrumental Goals
10:50 - The Orthogonality Thesis
14:47 - The Instrumental Convergence Thesis
20:16 - Resource Acquisition as an Instrumental Goal
22:02 - The importance of goal-content integrity
25:42 - Deception as an Instrumental Goal
29:17 - How the doomsaying argument works
31:46 - Critiquing the theory: the problem of self-referential final goals
36:20 - The problem of incoherent goals
42:44 - Does the truth of moral realism undermine the orthogonality thesis?
50:50 - Problems with the distinction between instrumental goals and final goals
57:52 - Why do some people deny the problem of AI risk?
1:04:10 - Strong versus Weak AI Scepticism
1:09:00 - Is it difficult to be taken seriously on this topic?

Relevant Links
Olle's Blog
Olle's webpage at Chalmers University
'Challenges to the Omohundro-Bostrom framework for AI Motivations' by Olle (highly recommended)
'The Superintelligent Will' by Nick Bostrom
'The Basic AI Drives' by Stephen Omohundro
Olle Häggström: Science, Technology, and the Future of Humanity (video)
Olle Häggström and Thore Husveldt debate AI Risk (video)
Summary of Bostrom's theory (by me)
'Why AI doomsayers are like sceptical theists and why it matters' by me

 #61 - Yampolskiy on Machine Consciousness and AI Welfare | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security and ethics, including Artificial Superintelligence: a Futuristic Approach. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare. You can listen below or download here. You can also subscribe to the podcast on Apple, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
2:30 - Artificial minds versus Artificial Intelligence
6:35 - Why talk about machine consciousness now when it seems far-fetched?
8:55 - What is phenomenal consciousness?
11:04 - Illusions as an insight into phenomenal consciousness
18:22 - How to create an illusion-based test for machine consciousness
23:58 - Challenges with operationalising the test
31:42 - Does AI already have a minimal form of consciousness?
34:08 - Objections to the proposed test and next steps
37:12 - Towards a science of AI welfare
40:30 - How do we currently test for animal and human welfare?
44:10 - Dealing with the problem of deception
47:00 - How could we test for welfare in AI?
52:39 - If an AI can suffer, do we have a duty not to create it?
56:48 - Do people take these ideas seriously in computer science?
58:08 - What next?

Relevant Links
Roman's homepage
'Detecting Qualia in Natural and Artificial Agents' by Roman
'Towards AI Welfare Science and Policies' by Soenke Ziesche and Roman Yampolskiy
The Hard Problem of Consciousness
25 famous optical illusions
Could AI get depressed and have hallucinations?
