
Philosophical Disquisitions

Summary: Interviews with experts about the philosophy of the future.

Podcasts:

 Episode #46 - Minerva on the Ethics of Cryonics | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the Journal of Medical Ethics, Bioethics, the Cambridge Quarterly of Healthcare Ethics and the Hastings Center Report. We talk about life, death and the wisdom and ethics of cryonics. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:34 - What is cryonics anyway?
6:54 - The tricky logistics of cryonics: you need to die in the right way
10:30 - Is cryonics too weird/absurd to take seriously? Analogies with IVF and frozen embryos
16:04 - The opportunity cost of cryonics
18:18 - Is death bad? Why?
22:51 - Is life worth living at all? Is it better never to have been born?
24:44 - What happens when life is no longer worth living? The attraction of cryothanasia
30:28 - Should we want to live forever? Existential tiredness and existential boredom
37:20 - Is immortality irrelevant to the debate about cryonics?
41:42 - Even if cryonics is good for me, might it be the unethical choice?
45:00 (ish) - Egalitarianism and the distribution of life years
49:39 - Would future generations want to revive us?
52:34 - Would we feel out of place in the distant future?

Relevant Links:
Francesca's webpage
'The Ethics of Cryonics: Is it immoral to be immortal?' by Francesca
'Cryopreservation of Embryos and Fetuses as a Future Option for Family Planning Purposes' by Francesca and Anders Sandberg
'Euthanasia and Cryothanasia' by Francesca and Anders Sandberg
'The Badness of Death and the Meaning of Life' (Series) - pretty much everything I've ever written about the philosophy of life and death
Alcor Life Extension Foundation
Cryonics Institute
To Be a Machine by Mark O'Connell

 Episode #49 - Maas on AI and the Future of International Law | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Matthijs Maas. Matthijs is a doctoral researcher at the University of Copenhagen's 'AI and Legal Disruption' research unit, and a research affiliate with the Governance of AI Program at Oxford University's Future of Humanity Institute. His research focuses on safe and beneficial global governance strategies for emerging, transformative AI systems. This involves, in part, a study of the requirements and pitfalls of international regimes for technology arms control and non-proliferation, and of the conditions under which these are legitimate and effective. We talk about the phenomenon of 'globally disruptive AI' and the effect it will have on the international legal order. You can download the episode here or listen below. You can also subscribe via iTunes or Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
2:11 - International Law 101
6:38 - How technology has repeatedly shaped the content of international law
10:43 - The phenomenon of 'globally disruptive artificial intelligence' (GDAI)
15:20 - GDAI and the development of international law
18:05 - Will we need new laws?
19:50 - Will GDAI result in lots of legal uncertainty?
21:57 - Will the law be under/over-inclusive of GDAI?
25:21 - Will GDAI render international law obsolete?
31:00 - Could we have a tech-neutral international law?
34:10 - Could we automate the monitoring and enforcement of international law?
44:35 - Could we replace international legal institutions with technological systems of management?
47:35 - Could GDAI lead to the end of the international legal order?
57:23 - Could GDAI result in more isolationism and less multilateralism?
1:00:40 - So what will the future be?

Relevant Links:
Follow Matthijs on Twitter
Artificial Intelligence and Legal Disruption research group (University of Copenhagen)
Governance of AI Program (University of Oxford)
Dafoe, Allan. "AI Governance: A Research Agenda." Oxford: Governance of AI Program, Future of Humanity Institute, 2018.
On the history of technology and international law:
Picker, Colin B. "A View from 40,000 Feet: International Law and the Invisible Hand of Technology." Cardozo Law Review 23 (2001): 151-219.
Brownsword, Roger. "In the Year 2061: From Law to Technological Management." Law, Innovation and Technology 7, no. 1 (2015): 1-51.
Boutin, Berenice. "Technologies for International Law & International Law for Technologies." Groningen Journal of International Law (blog), October 22, 2018.
Moses, Lyria Bennett. "Recurring Dilemmas: The Law's Race to Keep Up With Technological Change." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, April 11, 2007.
On establishing legal 'artificially intelligent entities', etc.:
Burri, Thomas. "International Law and Artificial Intelligence." SSRN Electronic Journal, 2017.

 Episode #48 - Gunkel on Robot Rights | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political structures in the information age and much much more. He is the author of several books, including Hacking Cyberspace, The Machine Question, Of Remixology, Gaming the System and, most recently, Robot Rights. We have a long debate/conversation about whether or not robots should/could have rights. You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:52 - Isn't the idea of robot rights ridiculous?
3:37 - What is a robot anyway? Is the concept too nebulous/diverse?
7:43 - Has science fiction undermined our ability to think about robots clearly?
11:01 - What would it mean to grant a robot rights? (A precis of Hohfeld's theory of rights)
18:32 - The four positions/modalities one could take on the idea of robot rights
21:32 - The First Modality: Robots Can't Have Rights therefore Shouldn't
23:37 - The EPSRC guidelines on robotics as an example of this modality
26:04 - Criticisms of the EPSRC approach
28:27 - Other problems with the first modality
31:32 - Europe vs Japan: why the Japanese might be more open to robot 'others'
34:00 - The Second Modality: Robots Can Have Rights therefore Should (some day)
39:53 - A debate between myself and David about the second modality (why I'm in favour of it and he's against it)
47:17 - The Third Modality: Robots Can Have Rights but Shouldn't (Bryson's view)
53:48 - Can we dehumanise/depersonalise robots?
58:10 - The Robot-Slave Metaphor and its Discontents
1:04:30 - The Fourth Modality: Robots Cannot Have Rights but Should (Darling's view)
1:07:53 - Criticisms of the fourth modality
1:12:05 - The 'Thinking Otherwise' Approach (David's preferred approach)
1:16:23 - When can robots take on a face?
1:19:44 - Is there any possibility of reconciling my view with David's?
1:24:42 - So did David waste his time writing this book?

Relevant Links:
David's Homepage
Robot Rights from MIT Press, 2018 (and on Amazon)
Episode 10 - Gunkel on Robots and Cyborgs
'The other question: can and should robots have rights?' by David Gunkel
'Facing Animals: A Relational Other-Oriented Approach to Moral Standing' by Gunkel and Coeckelbergh
The Robot Rights Debate (Index) - everything I've written or said on the topic of robot rights
EPSRC Principles of Robotics
Episode 24 - Joanna Bryson on Why Robots Should be Slaves
'Patiency is not a virtue: the design of intelligent systems and systems of ethics' by Joanna Bryson
Robo Sapiens Japanicus by Jennifer Robertson

 Episode #47 - Eubanks on Automating Inequality | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper's and Wired. She has worked for two decades in community technology and economic justice movements. We talk about the history of poverty management in the US and how it is now being infiltrated and affected by tools for algorithmic governance. You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:39 - The future is unevenly distributed but not in the way you might think
7:05 - Virginia's personal encounter with the tools for automating inequality
12:33 - Automated helplessness?
14:11 - The history of poverty management: denial and moralisation
22:40 - Technology doesn't disrupt our ideology of poverty; it amplifies it
24:16 - The problem of poverty myths: it's not just something that happens to other people
28:23 - The Indiana Case Study: Automating the system for claiming benefits
33:15 - The problem of automated defaults in the Indiana Case
37:32 - What happened in the end?
41:38 - The L.A. Case Study: A "match.com" for the homeless
45:40 - The Allegheny County Case Study: Managing At-Risk Children
52:46 - Doing the right things but still getting it wrong?
58:44 - The need to design an automated system that addresses institutional bias
1:07:45 - The problem of technological solutions in search of a problem
1:10:46 - The key features of the digital poorhouse

Relevant Links:
Virginia's Homepage
Virginia on Twitter
Automating Inequality
'A Child Abuse Prediction Model Fails Poor Families' by Virginia in Wired
The Allegheny County Family Screening Tool (official webpage - includes a critical response to Virginia's Wired article)
'Can an Algorithm Tell when Kids Are in Danger?' by Dan Hurley (a generally positive story about the family screening tool in the New York Times)
'A Response to Allegheny County DHS' by Virginia (a response to Allegheny County's defence of the family screening tool)
Episode 41 with Reuben Binns on Fairness in Algorithmic Decision-Making
Episode 19 with Andrew Ferguson about Predictive Policing

 Episode #45 - Vallor on Virtue Ethics and Technology | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network. She has served as President of the Society for Philosophy and Technology, sits on the Board of Directors of the Foundation for Responsible Robotics, and is a member of the IEEE Standards Association's Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. We talk about the problem of techno-social opacity and the value of virtue ethics in an era of rapid technological change. You can download the episode here or listen below. You can also subscribe to the podcast on iTunes or Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:39 - How students encouraged Shannon to write Technology and the Virtues
6:30 - The problem of acute techno-moral opacity
12:34 - Is this just the problem of morality in a time of accelerating change?
17:16 - Why can't we use abstract moral principles to guide us in a time of rapid technological change? What's wrong with utilitarianism or Kantianism?
23:40 - Making the case for technologically-sensitive virtue ethics
27:27 - The analogy with education: teaching critical thinking skills vs providing students with information
31:19 - Aren't most virtue ethical traditions too antiquated? Aren't they rooted in outdated historical contexts?
37:54 - Doesn't virtue ethics assume a relatively fixed human nature? What if human nature is one of the things that is changed by technology?
42:34 - Case study on Social Media: Defending Mark Zuckerberg
46:54 - The Dark Side of Social Media
52:48 - Are we trapped in an immoral equilibrium? How can we escape?
57:17 - What would the virtuous person do right now? Would he/she delete Facebook?
1:00:23 - Can we use technology to solve problems created by technology? Will this help to cultivate the virtues?
1:05:00 - The virtue of self-regard and the problem of narcissism in a digital age

Relevant Links:
Shannon's Homepage
Shannon's profile at Santa Clara University
Shannon's Twitter profile
Technology and the Virtues (now in paperback!) by Shannon
'Social Networking Technology and the Virtues' by Shannon
'Moral Deskilling and Upskilling in a New Machine Age' by Shannon
'The Moral Problem of Accelerating Change' by John Danaher

 Episode #44 - Fleischman on Evolutionary Psychology and Sex Robots | File Type: audio/mpeg | Duration: Unknown

In this episode I chat to Diana Fleischman. Diana is a senior lecturer in evolutionary psychology at the University of Portsmouth. Her research focuses on hormonal influences on behavior, human sexuality, disgust and, recently, the interface of evolutionary psychology and behaviorism. She is a utilitarian, a promoter of effective altruism, and a bivalvegan. We have a long and detailed chat about the evolved psychology of sex and how it may affect the social acceptance and use of sex robots. Along the way we talk about Mills & Boon novels, the connection between sexual stimulation and the brain, and other, no doubt controversial, topics. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:42 - Evolutionary Psychology and the Investment Theory of Sex
5:54 - What's the evidence for the investment theory in humans?
8:40 - Does the evidence for the theory hold up?
11:45 - Studies on the willingness to engage in casual sex: do men and women really differ?
18:33 - The ecological validity of these studies
20:20 - Evolutionary psychology and the replication crisis
23:29 - Are there better alternative explanations for sex differences?
26:25 - Ethical criticisms of evolutionary psychology
28:14 - Sex robots and evolutionary psychology
29:33 - Argument 1: The rising costs of courtship will drive men into the arms of sexbots
34:12 - Not all men...
39:08 - Couldn't something similar be true for women?
46:00 - Aren't the costs of courtship much higher for women?
48:27 - Argument 2: Sex robots could be used as treatment for dangerous men
51:50 - Would this stigmatise other sexbot users?
53:31 - Would this embolden rather than satiate?
55:53 - Could the logic of this argument be flipped, e.g. the Futurama argument?
58:05 - Isn't this an ethically sub-optimal solution to the problem?
1:00:42 - Argument 3: This will also impact on women's sexual behaviour
1:07:01 - Do ethical objectors to sex robots underestimate the constraints of our evolved psychology?

Relevant Links:
Diana's personal webpage
Diana on Twitter
Diana's academic homepage
'Uncanny Vulvas' in Jacobite Magazine - this is the basis for much of our discussion in the podcast
'Disgust Trumps Lust: Women's Disgust and Attraction Towards Men Is Unaffected by Sexual Arousal' by Zsok, Fleischman, Borg and Morrison
Beyond Human Nature by Jesse Prinz
'Which people would agree to have sex with a stranger?' by David Schmitt
'Sex Work, Technological Unemployment and the Basic Income Guarantee' by John Danaher

 Episode #43 - Elder on Friendship, Robots and Social Media | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy - primarily Chinese and Greek - in order to think about current problems. She is the author of a number of articles on the philosophy of friendship, and her book Friendship, Robots, and Social Media: False Friends and Second Selves came out in January 2018. We talk about all things to do with friendship, social media and social robots. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:37 - Aristotle's theory of friendship
5:00 - The idea of virtue/character friendship
10:14 - The enduring appeal of Aristotle's account of friendship
12:30 - Does social media corrode friendship?
16:35 - The Publicity Objection to online friendships
20:40 - The Superficiality Objection to online friendships
25:23 - The Commercialisation/Contamination Objection to online friendships
30:34 - Deception in online friendships
35:18 - Must we physically interact with our friends?
39:25 - Social robots as friends (with a specific focus on elderly populations and those on the autism spectrum)
46:50 - Can you be friends with a robot? The counterfeit currency analogy
50:55 - Does the analogy hold up?
56:13 - Why are robotic friends assumed to be fake?
1:03:50 - Does the 'falseness' of robotic friends depend on the type of friendship we are interested in?
1:06:38 - What about companion animals?
1:08:35 - Where is this debate going?

Relevant Links:
Alexis Elder's webpage
'Excellent Online Friendships: An Aristotelian Defence of Social Media' by Alexis
'False Friends and False Coinage: a tool for navigating the ethics of sociable robots' by Alexis
Friendship, Robots and Social Media by Alexis
'Can you be friends with a robot? Aristotelian Friendship and Robotics' by John Danaher

 Episode #42 - Earp on Psychedelics and Moral Enhancement | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the philosophy of science. His research has been covered in Nature, Popular Science, The Chronicle of Higher Education, The Atlantic, New Scientist, and other major outlets. We talk about moral enhancement and the potential use of psychedelics as a form of moral enhancement. You can download the episode here or listen below. You can also subscribe to the podcast on iTunes and Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:53 - Why psychedelics and moral enhancement?
5:07 - What is moral enhancement anyway? Why are people excited about it?
7:12 - What are the methods of moral enhancement?
10:18 - Why is Brian sceptical about the possibility of moral enhancement?
14:16 - So is it an empty idea?
17:58 - What if we adopt an 'extended' concept of enhancement, i.e. beyond the biomedical?
26:12 - Can we use psychedelics to overcome the dilemma facing the proponent of moral enhancement?
29:07 - What are psychedelic drugs? How do they work on the brain?
34:26 - Are your experiences whilst on psychedelic drugs conditional on your cultural background?
37:39 - Dissolving the ego and the feeling of oneness
41:36 - Are psychedelics the new productivity hack?
43:48 - How can psychedelics enhance moral behaviour?
47:36 - How can a moral philosopher make sense of these effects?
51:12 - The MDMA case study
58:38 - How about MDMA-assisted political negotiations?
1:02:11 - Could we achieve the same outcomes without drugs?
1:06:52 - Where should the research go from here?

Relevant Links:
Brian's academia.edu page
Brian's ResearchGate page
Brian as Rob Walker (and his theatre reel)
'Psychedelic moral enhancement' by Brian Earp
'Moral Neuroenhancement' by Earp, Douglas and Savulescu
How to Change Your Mind by Michael Pollan
Interview with Ole Martin Moen on the ethics of psychedelics
The Doors of Perception by Aldous Huxley
Roland Griffiths Laboratory at Johns Hopkins

 Episode #41 - Binns on Fairness in Algorithmic Decision-Making | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates about algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:46 - What is algorithmic decision-making?
4:20 - Isn't all decision-making algorithmic?
6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate
12:02 - Limitations of the COMPAS debate
15:22 - Other examples of unfairness in algorithmic decision-making
17:00 - What is discrimination in decision-making?
19:45 - The mental state theory of discrimination
25:20 - Statistical discrimination and the problem of generalisation
29:10 - Defending algorithmic decision-making from the charge of statistical discrimination
34:40 - Algorithmic typecasting: Could we all end up like William Shatner?
39:02 - Egalitarianism and algorithmic decision-making
43:07 - The role that luck and desert play in our understanding of fairness
49:38 - Deontic justice and historical discrimination in algorithmic decision-making
53:36 - Fair distribution vs Fair recognition
59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making?

Relevant Links:
Reuben's homepage
Reuben's institutional page
'Fairness in Machine Learning: Lessons from Political Philosophy' by Reuben Binns
'Algorithmic Accountability and Public Reason' by Reuben Binns
'It's Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making' by Binns et al
'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm
'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al - an impossibility proof showing that you cannot minimise false positive rates and equalise accuracy rates across two populations at the same time (except in the rare case that the base rate for both populations is the same)

 Episode #40: Nyholm on Accident Algorithms and the Ethics of Self-Driving Cars | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and who should be held responsible for them if something goes wrong. We chat about these issues and more. You can download the podcast here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:22 - What is a self-driving car?
3:00 - Fatal crashes involving self-driving cars
5:10 - Could self-driving cars ever be completely safe?
8:14 - Limitations of the Trolley Problem
11:22 - What kinds of accident scenarios do we need to plan for?
17:18 - Who should decide which ethical rules a self-driving car follows?
23:47 - Why not randomise the ethical rules?
25:18 - Experimental findings on people's preferences with self-driving cars
29:16 - Is this just another typical applied ethical debate?
31:27 - What would a utilitarian self-driving car do?
36:30 - What would a Kantian self-driving car do?
39:33 - A contractualist approach to the ethics of self-driving cars
43:54 - The responsibility gap problem
46:12 - Scepticism of the responsibility gap: can self-driving cars be agents?
53:17 - A collaborative agency approach to self-driving cars
58:18 - So who should we blame if something goes wrong?
1:03:40 - Is there a duty to hand over driving to machines?
1:07:30 - Must self-driving cars be programmed to kill?

Relevant Links:
Sven's faculty webpage
'The Ethics of Crashes with Self-Driving Cars: A Roadmap I' by Sven
'The Ethics of Crashes with Self-Driving Cars: A Roadmap II' by Sven
'Attributing Responsibility to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility Loci' by Sven
'The Ethics of Accident Algorithms for Self-Driving Cars: An Applied Trolley Problem' by Nyholm and Smids
'Automated Cars meet Human Drivers: responsible human-robot coordination and the ethics of mixed traffic' by Nyholm and Smids
Episode #3 with Sven on Love Drugs, DBS and Self-Driving Cars
Episode #23 with Liu on Responsibility and Discrimination in Self-Driving Cars

 Episode #39 - Re-engineering Humanity with Frischmann and Selinger | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is Professor of Philosophy at the Rochester Institute of Technology. Their book looks at how modern techno-social engineering is affecting humanity. We have a wide-ranging conversation about the main arguments and ideas from the book. The book features lots of interesting thought experiments and provocative claims. I recommend checking it out. A highlight of this conversation for me was our discussion of the 'Free Will Wager' and how it pertains to debates about technology and social engineering. You can listen to the episode below or download it here. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:33 - What is techno-social engineering?
7:55 - Is techno-social engineering turning us into simple machines?
14:11 - Digital contracting as an example of techno-social engineering
22:17 - The three important ingredients of modern techno-social engineering
29:17 - The Digital Tragedy of the Commons
34:09 - Must we wait for a Leviathan to save us?
44:03 - The Free Will Wager
55:00 - The problem of Engineered Determinism
1:00:03 - What does it mean to be self-determined?
1:12:03 - Solving the problem? The freedom to be off

Relevant Links:
Evan Selinger's homepage
Brett Frischmann's homepage
Re-engineering Humanity - website
'Reverse Turing Tests: Are humans becoming more machine-like?' by me
Episode 4 with Evan Selinger on Privacy and Algorithmic Outsourcing
Episode 7 with Brett Frischmann on Human-Focused Turing Tests
Gregg Caruso on 'Free Will Skepticism and Its Implications: An Argument for Optimism'
Derk Pereboom on Relationships and Free Will

 Episode #38 - Schwartz on the Ethics of Space Exploration | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Dr James Schwartz. James teaches philosophy at Wichita State University. His primary area of research is the philosophy and ethics of space exploration, where he defends a position according to which space exploration derives its value primarily from the importance of the scientific study of the Solar System. He is editor (with Tony Milligan) of The Ethics of Space Exploration (Springer 2016) and his publications have appeared in Advances in Space Research, Space Policy, Acta Astronautica, Astropolitics, Environmental Ethics, Ethics & the Environment, and Philosophia Mathematica. He has also contributed chapters to The Meaning of Liberty Beyond Earth, Human Governance Beyond Earth, Dissent, Revolution and Liberty Beyond Earth (each edited by Charles Cockell), and to Yearbook on Space Policy 2015. He is currently working on a book project, The Value of Space Science. We talk about all things space-related, including the scientific case for space exploration and the myths that befuddle space advocacy. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes:
0:00 - Introduction
1:40 - Why did James get interested in the philosophy of space?
3:17 - Is interest in the philosophy and ethics of space exploration on the rise?
6:05 - Do space ethicists always say "no"?
8:20 - Do we have a duty to explore space? If so, what kind of duty is this?
10:30 - Space exploration and the duty to ensure species survival
16:16 - The link between space ethics and environmental ethics: between misanthropy and anthropocentrism
19:33 - How would space exploration help human survival?
23:20 - The scientific value of space exploration: manned or unmanned?
28:30 - Why does the scientific case for space exploration take priority?
35:40 - Is it our destiny to explore space?
38:46 - Thoughts on Elon Musk and the Colonisation Project
44:34 - The Myths of Space Advocacy
51:40 - From space philosophy to space policy: getting rid of the myths
58:55 - The future of space philosophy

Relevant Links:
Dr Schwartz's website - The Space Philosopher (with links to papers and works in progress)
'Space Settlement: What's the rush?' by James Schwartz
Myth-Free Space Advocacy Part I, Part II, Part III, Part IV by James Schwartz
Video of James's lecture on Worldship Ethics
'Prioritizing Scientific Exploration: A Comparison of Ethical Justifications for Space Development and Space Science' by James Schwartz
Episode 37 with Christopher Yorke (the middle section deals with the prospects for a utopia in space)

 Episode #37 - Yorke on the Philosophy of Utopianism | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled 'Bernard Suits' Utopia of Gameplay: A Critical Analysis'. We talk about all things utopian, including what a 'utopia' is, why space exploration is associated with utopian thinking, and whether Bernard Suits is correct to say that games are the highest ideal of human existence. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:00 - Why did Christopher choose to study utopianism?
6:44 - What is a 'utopia'? Defining the ideal society
14:00 - Is utopia practically achievable?
19:34 - Why are dystopias easier to imagine than utopias?
23:00 - Blueprints vs Horizons - different understandings of the utopian project
26:40 - What do philosophers bring to the study of utopia?
30:40 - Why is space exploration associated with utopianism?
39:20 - Kant's Perpetual Peace vs the Final Frontier
47:09 - Suits's Utopia of Games: What is a game?
53:16 - Is game-playing the highest ideal of human existence?
1:01:15 - What kinds of games will Suits's utopians play?
1:14:41 - Is a post-instrumentalist society really intelligible?

Relevant Links
Christopher Yorke's Academia.edu page
'Prospects for Utopia in Space' by Christopher Yorke
'Endless Summer: What kinds of games will Suits's Utopians Play?' by Christopher Yorke
'The Final Frontier: Space Exploration as Utopia Project' by John Danaher
'The Utopia of Games: Intelligible or Unintelligible' by John Danaher
Other posts on utopianism and the good life
The Grasshopper by Bernard Suits

 Episode #36 - Wachter on Algorithms, Explanations, and the GDPR | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra's research focuses on the legal and ethical implications of Big Data, AI, and robotics, as well as governmental surveillance, predictive policing, and human rights online. Her current work deals with the ethical design of algorithms, including the development of standards and methods to ensure fairness, accountability, transparency, interpretability, and group privacy in complex algorithmic systems. You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:05 - The rise of algorithmic/automated decision-making
3:40 - Why are algorithmic decisions so opaque? Why is this such a concern?
5:25 - What are the benefits of algorithmic decisions?
7:43 - Why might we want a 'right to explanation' of algorithmic decisions?
11:05 - Explaining specific decisions vs. explaining decision-making systems
15:48 - Introducing the GDPR - What is it and why does it matter?
19:29 - Is there a right to explanation embedded in Article 22 of the GDPR?
23:30 - The limitations of Article 22
27:40 - When do algorithmic decisions have 'significant effects'?
29:30 - Is there a right to explanation in Articles 13 and 14 of the GDPR (the 'notification duties' provisions)?
33:33 - Is there a right to explanation in Article 15 (the access right provision)?
37:45 - Is there any hope that a right to explanation might be interpreted into the GDPR?
43:04 - How could we explain algorithmic decisions? Introducing counterfactual explanations
47:55 - Clarifying the concept of a counterfactual explanation
51:00 - Criticisms and limitations of counterfactual explanations

Relevant Links
Sandra's profile page at the Oxford Internet Institute
Sandra's Academia.edu page
'Why a right to explanation does not exist in the General Data Protection Regulation' by Wachter, Mittelstadt and Floridi
'Counterfactual explanations without opening the black box: Automated decisions and the GDPR' by Wachter, Mittelstadt and Russell
The General Data Protection Regulation
Article 29 Working Party guidance on the GDPR
'Do judges make stricter sentencing decisions when they are hungry?' and a Reply
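The idea of a counterfactual explanation discussed in this episode can be illustrated with a toy sketch: rather than opening the black box, you report the smallest change to an input that would have flipped the decision. The credit model, feature names, and numbers below are hypothetical illustrations for this post, not taken from Wachter, Mittelstadt and Russell's paper.

```python
# A toy counterfactual explanation: find the smallest income increase
# that would have turned a loan denial into an approval.
# The model and thresholds here are made up for illustration.

def score(income, debt):
    """A toy linear credit model: approve if the score is >= 0."""
    return 0.5 * income - 1.0 * debt - 10.0

def counterfactual_income(income, debt, step=0.5, max_iter=1000):
    """Search upward for the smallest income that flips the decision."""
    candidate = income
    for _ in range(max_iter):
        if score(candidate, debt) >= 0:
            return candidate
        candidate += step
    return None  # no counterfactual found within the search range

# An applicant with income 20.0 and debt 2.0 is denied (score -2.0).
# The counterfactual explanation: "had your income been 24.0, you
# would have been approved."
needed = counterfactual_income(income=20.0, debt=2.0)
```

In real systems the search runs over many features at once and minimises a distance to the original input, but the logic is the same: the explanation is a nearby possible world in which the decision goes the other way.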

 Episode #35 - Brundage on the Case for Conditional Optimism about AI | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford's Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal of Responsible Innovation (JRI). His research focuses on the societal implications of artificial intelligence. We discuss the case for conditional optimism about AI. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:00 - Why did Miles write the conditional case for AI optimism?
5:07 - What is AI anyway?
8:26 - The difference between broad and narrow forms of AI
12:00 - Is the current excitement around AI hype or reality?
16:13 - What is the conditional case for AI conditional upon?
22:00 - The First Argument: The Value of Task Expedition
29:30 - The downsides of task expedition and the problem of speed mismatches
33:28 - How AI changes our cognitive ecology
36:00 - The Second Argument: The Value of Improved Coordination
40:50 - Wouldn't AI be used for malicious purposes too?
45:00 - Can we create safe AI in the absence of global coordination?
48:03 - The Third Argument: The Value of a Leisure Society
52:30 - Would a leisure society really be utopian?
56:24 - How were Miles's arguments received when presented at the EU Parliament?

Relevant Links
Miles's homepage
Miles's past publications
Miles at the Future of Humanity Institute
Video of Miles's presentation to the EU Parliament (starts at approx 10:05:19, or 1 hour and 1 minute into the video)
Olle Haggstrom's write-up about the EU Parliament event
'Cognitive Scarcity and Artificial Intelligence' by Miles Brundage and John Danaher
