Philosophical Disquisitions

Summary: Interviews with experts about the philosophy of the future.

Podcasts:

 Ethics of Academia (4) - Justin Weinberg | File Type: audio/mpeg | Duration: Unknown

In this episode of the Ethics of Academia, I chat to Justin Weinberg, Associate Professor of Philosophy at the University of South Carolina. Justin researches ethical and social philosophy, as well as metaphilosophy. He is also the editor of the popular Daily Nous blog and has, as a result, developed an interest in many of the moral dimensions of philosophical academia. Our conversation accordingly traverses a wide territory, from the purpose of philosophical research to the ethics of grading. You can download the episode here or listen below. You can also subscribe on Apple, Spotify, Google or any other preferred podcasting service.

 Ethics of Academia (3) - Regina Rini | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Regina Rini, Canada Research Chair at York University in Toronto. Regina has a background in neuroscience and cognitive science but now works primarily in moral philosophy. She has the distinction of writing a lot of philosophy for the public through her columns for the Times Literary Supplement, and the value of this becomes a major theme of our conversation. You can download the episode here or listen below. You can also subscribe on Apple, Spotify and other podcasting services.

 Ethics of Academia (2) with Michael Cholbi | File Type: audio/mpeg | Duration: Unknown

This is the second episode in my short series on The Ethics of Academia. In this episode I chat to Michael Cholbi, Professor of Philosophy at the University of Edinburgh. We reflect on the value of applied ethical research and the right approach to teaching. Michael has thought quite a lot about the ethics of work, in general, and the ethics of teaching and grading in particular. So those become central themes in our conversation. You can download the podcast here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

 The Ethics of Academia Podcast (Episode 1 with Sven Nyholm) | File Type: audio/mpeg | Duration: Unknown

I have been reflecting on the ethics of academic life for some time. I've written several articles about it over the years. These have focused on the ethics of grading, student-teacher relationships, academic career choice, and the value of teaching (among other things). I've only scratched the surface. It seems to me that academic life is replete with ethical dilemmas and challenges. Some systematic reflection on and discussion of those ethical challenges would seem desirable. Obviously, there is a fair bit of writing available on the topic but, as best I can tell, there is no podcast dedicated to it. So I decided to start one. I'm launching this podcast as both an addendum to my normal podcast (which deals primarily with the ethics of technology) and as an independent podcast in its own right. If you just want to subscribe to the Ethics of Academia, you can do so here (Apple and Spotify). (And if you do so, you'll get the added bonus of access to the first three episodes.) I intend this to be a limited series but, if it proves popular, I might come back to it. In the first episode, I chat to Sven Nyholm (Utrecht University) about the ethics of research, teaching and administration. Sven is a longtime friend and collaborator. He has been one of my most frequent guests on my main podcast, so he seemed like the ideal person to kickstart this series. Although we talk about a lot of different things, Sven draws particular attention to the ethical importance of the division of labour in academic life. You can download the episode here or listen below.

 98 - The Psychology of Human-Robot Interactions | File Type: audio/mpeg | Duration: Unknown

How easily do we anthropomorphise robots? Do we see them as moral agents or, even, moral patients? Can we dehumanise them? These are some of the questions addressed in this episode with my guests, Dennis Küster and Aleksandra Świderska. Dennis is a postdoctoral researcher at the University of Bremen. Aleksandra is a senior researcher at the University of Warsaw. They have worked together on a number of studies about how humans perceive and respond to robots. We discuss several of their joint studies in this episode. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
Dennis's webpage
Aleksandra's webpage
'I saw it on YouTube! How online videos shape perceptions of mind, morality, and fears about robots' by Dennis, Aleksandra and David Gunkel
'Robots as malevolent moral agents: Harmful behavior results in dehumanization, not anthropomorphism' by Aleksandra and Dennis
'Seeing the mind of robots: Harm augments mind perception but benevolent intentions reduce dehumanisation of artificial entities in visual vignettes' by Dennis and Aleksandra

 #62 - Häggström on AI Motivations and Risk Denialism | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle's main research is in probability theory and statistical mechanics, but in recent years he has broadened his research interests to focus on applied statistics, philosophy, climate science, artificial intelligence and the social consequences of future technologies. He is the author of Here Be Dragons: Science, Technology and the Future of Humanity (OUP 2016). We talk about AI motivations, specifically the Omohundro-Bostrom theory of AI motivation and its weaknesses. We also discuss AI risk denialism. You can download the episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
2:02 - Do we need to define AI?
4:15 - The Omohundro-Bostrom theory of AI motivation
7:46 - Key concepts in the Omohundro-Bostrom theory: final goals vs instrumental goals
10:50 - The Orthogonality Thesis
14:47 - The Instrumental Convergence Thesis
20:16 - Resource acquisition as an instrumental goal
22:02 - The importance of goal-content integrity
25:42 - Deception as an instrumental goal
29:17 - How the doomsaying argument works
31:46 - Critiquing the theory: the problem of self-referential final goals
36:20 - The problem of incoherent goals
42:44 - Does the truth of moral realism undermine the orthogonality thesis?
50:50 - Problems with the distinction between instrumental goals and final goals
57:52 - Why do some people deny the problem of AI risk?
1:04:10 - Strong versus weak AI scepticism
1:09:00 - Is it difficult to be taken seriously on this topic?

Relevant Links
Olle's blog
Olle's webpage at Chalmers University
'Challenges to the Omohundro-Bostrom framework for AI Motivations' by Olle (highly recommended)
'The Superintelligent Will' by Nick Bostrom
'The Basic AI Drives' by Stephen Omohundro
Olle Häggström: Science, Technology, and the Future of Humanity (video)
Olle Häggström and Thore Husveldt debate AI Risk (video)
Summary of Bostrom's theory (by me)
'Why AI doomsayers are like sceptical theists and why it matters' by me

 #61 - Yampolskiy on Machine Consciousness and AI Welfare | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security and ethics, including Artificial Superintelligence: A Futuristic Approach. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare. You can listen below or download here. You can also subscribe to the podcast on Apple, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
2:30 - Artificial minds versus artificial intelligence
6:35 - Why talk about machine consciousness now when it seems far-fetched?
8:55 - What is phenomenal consciousness?
11:04 - Illusions as an insight into phenomenal consciousness
18:22 - How to create an illusion-based test for machine consciousness
23:58 - Challenges with operationalising the test
31:42 - Does AI already have a minimal form of consciousness?
34:08 - Objections to the proposed test and next steps
37:12 - Towards a science of AI welfare
40:30 - How do we currently test for animal and human welfare?
44:10 - Dealing with the problem of deception
47:00 - How could we test for welfare in AI?
52:39 - If an AI can suffer, do we have a duty not to create it?
56:48 - Do people take these ideas seriously in computer science?
58:08 - What next?

Relevant Links
Roman's homepage
'Detecting Qualia in Natural and Artificial Agents' by Roman
'Towards AI Welfare Science and Policies' by Soenke Ziesche and Roman Yampolskiy
The Hard Problem of Consciousness
25 famous optical illusions
Could AI get depressed and have hallucinations?

 #58 - Neely on Augmented Reality, Ethics and Property Rights | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and augmented reality. We chat about the ethics of augmented reality, with a particular focus on property rights and the problems that arise when we blend virtual and physical reality together in augmented reality platforms. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other services (the RSS feed is here).

Show Notes
0:00 - Introduction
1:00 - What is augmented reality (AR)?
5:55 - Is augmented reality overhyped?
10:36 - What are property rights?
14:22 - Justice and autonomy in the protection of property rights
16:47 - Are we comfortable with property rights over virtual spaces/objects?
22:30 - The blending problem: why augmented reality poses a unique problem for the protection of property rights
27:00 - The different modalities of augmented reality: single-sphere or multi-sphere?
30:45 - Scenario 1: Single-sphere AR with private property
34:28 - Scenario 2: Multi-sphere AR with private property
37:30 - Other ethical problems in scenario 2
43:25 - Augmented reality vs imagination
47:15 - Public property as contested space
49:38 - Scenario 3: Multi-sphere AR with public property
54:30 - Scenario 4: Single-sphere AR with public property
1:00:28 - Must the owner of the single-sphere AR platform be regulated as a public utility/entity?
1:02:25 - Other important ethical issues that arise from the use of AR

Relevant Links
Erica's homepage
'Augmented Reality, Augmented Ethics: Who Has the Right to Augment a Particular Physical Space?' by Erica
'The Ethics of Choice in Single Player Video Games' by Erica
'The Risks of Revolution: Ethical Dilemmas in 3D Printing from a US Perspective' by Erica
'Machines and the Moral Community' by Erica
IKEA Place augmented reality app
L'Oreal's use of augmented reality make-up apps
Holocaust Museum Bans Pokemon Go

 #57 - Sorgner on Nietzschean Transhumanism | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow at the Ethics Centre of the Friedrich-Schiller-University in Jena. His main fields of research are Nietzsche, the philosophy of music, bioethics and meta-, post- and transhumanism. We talk about his case for a Nietzschean form of transhumanism. You can download the episode here or listen below. You can also subscribe to the podcast on iTunes, Stitcher and a variety of other podcasting apps (the RSS feed is here).

Show Notes
0:00 - Introduction
2:12 - Recent commentary on Stefan's book Ubermensch
3:41 - Understanding transhumanism - getting away from the "humanism on steroids" ideal
10:33 - Transhumanism as an attitude of experimentation and not a destination?
13:34 - Have we always been transhumanists?
16:51 - Understanding Nietzsche
22:30 - The Will to Power in Nietzschean philosophy
26:41 - How to understand "power" in Nietzschean terms
30:40 - The importance of perspectivalism and the abandonment of universal truth
36:40 - Is it possible for a Nietzschean to consistently deny absolute truth?
39:55 - The idea of the Ubermensch (Overhuman)
45:48 - Making the case for a Nietzschean form of transhumanism
51:00 - What about the negative associations of Nietzsche?
1:02:17 - The problem of moral relativism for transhumanists

Relevant Links
Stefan's homepage
The Ubermensch: A Plea for a Nietzschean Transhumanism - Stefan's new book (in German)
Posthumanism and Transhumanism: An Introduction - edited by Stefan and Robert Ranisch
'Nietzsche, the Overhuman and Transhumanism' by Stefan (open access)
'Beyond Humanism: Reflections on Trans- and Post-humanism' by Stefan (a response to critics of the previous article)
Nietzsche at the Stanford Encyclopedia of Philosophy

 97 - The Perils of Predictive Policing (& Automated Decision-Making) | File Type: audio/mpeg | Duration: Unknown

One particularly important social institution is the police force, which increasingly uses technological tools to deploy policing resources efficiently and effectively. I've covered criticisms of these tools in the past, but in this episode my guest Daniel Susser has some novel perspectives to share on this topic, as well as some broader reflections on how humans can relate to machines in social decision-making. This one was a lot of fun and covered a lot of ground. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
Daniel's homepage
Daniel on Twitter
'Predictive Policing and the Ethics of Preemption' by Daniel
'Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making' by Daniel (with Kiel Brennan-Marquez and Karen Levy)

 96 - How Does Technology Mediate Our Morals? | File Type: audio/mpeg | Duration: Unknown

It is common to think that technology is morally neutral. "Guns don't kill people; people kill people," as the typical gun lobby argument goes. But is this really the right way to think about technology? Could it be that technology is not as neutral as we might suppose? These are questions I explore today with my guest Olya Kudina. Olya is an ethicist of technology focusing on the dynamic interaction between values and technologies. Currently, she is an Assistant Professor at Delft University of Technology. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
Olya's homepage
Olya on Twitter
The technological mediation of morality: value dynamism, and the complex interaction between ethics and technology - Olya's PhD thesis
'Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy' by Olya and Peter-Paul Verbeek
'"Alexa, who am I?": Voice Assistants and Hermeneutic Lemniscate as the Technologically Mediated Sense-Making' by Olya
'Moral Uncertainty in Technomoral Change: Bridging the Explanatory Gap' by Philip Nickel, Olya Kudina and Ibo van de Poel

 95 - The Psychology of the Moral Circle | File Type: audio/mpeg | Duration: Unknown

I was raised in the tradition of believing that everyone is of equal moral worth. But when I scrutinise my daily practices, I don't think I can honestly say that I act as if everyone is of equal moral worth. The idea that some people belong within the circle of moral concern and some do not is central to many moral systems. But what affects the dynamics of the moral circle? How does it contract and expand? Can it expand indefinitely? In this episode I discuss these questions with Joshua Rottman. Josh is an Associate Professor in the Department of Psychology and the Program in Scientific and Philosophical Studies of Mind at Franklin and Marshall College. His research is situated at the intersection of cognitive development and moral psychology, and he primarily focuses on studying the factors that lead certain entities and objects to be attributed with (or stripped of) moral concern. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
The normative significance of moral psychology
The concept of the moral circle
How the moral circle develops in children
How the moral circle changes over time
Can the moral circle expand indefinitely?
Do we have a limited budget of moral concern?
Do most people underuse their budget of moral concern?
Why do some people prioritise the non-human world over marginal humans?

Relevant Links
Josh's webpage at F and M College
Josh's personal webpage
Josh at Psychology Today
'Tree huggers vs Human Lovers' by Josh et al
Summary of the above article at Psychology Today
'Towards a Psychology of Moral Expansiveness' by Crimston et al

 94 - Robot Friendship and Hatred | File Type: audio/mpeg | Duration: Unknown

Can we move beyond the Aristotelian account of friendship when thinking about our relationships with robots? Can we hate robots? In this episode, I talk to Helen Ryland about these topics. Helen is a UK-based philosopher. She completed her PhD in Philosophy in 2020 at the University of Birmingham. She now works as an Associate Lecturer for The Open University. Her work examines human-robot relationships, video game ethics, and the personhood and moral status of marginal cases of human rights (e.g., subjects with dementia, nonhuman animals, and robots). You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics covered include:
What is friendship and why does it matter?
The Aristotelian account of friendship
Limitations of the Aristotelian account
Moving beyond Aristotle
The degrees of friendship model
Why we can be friends with robots
Criticisms of robot-human friendship
The possibility of hating robots
Do we already hate robots?
Why would it matter if we did hate robots?

Relevant Links
Helen's homepage
'It's Friendship Jim, But Not as We Know It: A Degrees-of-Friendship View of Human-Robot Friendships' by Helen
'Could you hate a robot? Does it matter if you could?' by Helen

 93 - Will machines impede moral progress? | File Type: audio/mpeg | Duration: Unknown

Lots of people are worried about the ethics of AI. One particular area of concern is whether we should program machines to follow existing normative/moral principles when making decisions. But social moral values change over time. Should machines not be designed to allow for such changes? If machines are programmed to follow our current values, will they impede moral progress? In this episode, I talk to Ben Kenward and Thomas Sinclair about this issue. Ben is a Senior Lecturer in Psychology at Oxford Brookes University in the UK. His research focuses on ecological psychology, mainly examining environmental activism such as the Extinction Rebellion movement, of which he is a part. Thomas is a Fellow and Tutor in Philosophy at Wadham College, Oxford, and an Associate Professor of Philosophy at Oxford's Faculty of Philosophy. His research and teaching focus on questions in moral and political philosophy. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
What is a moral value?
What is a moral machine?
What is moral progress?
Has society progressed, morally speaking, in the past?
How can we design moral machines?
What's the problem with getting machines to follow our current moral consensus?
Will people over-defer to machines? Will they outsource their moral reasoning to machines?
Why is a lack of moral progress such a problem right now?

Relevant Links
Thomas's webpage
Ben's webpage
'Machine morality, moral progress and the looming environmental disaster' by Ben and Tom

 92 - The Ethics of Virtual Worlds | File Type: audio/mpeg | Duration: Unknown

Are virtual worlds free from the ethical rules of ordinary life? Do they generate their own ethical codes? How do gamers and game designers address these issues? These are the questions that I explore in this episode with my guest Lucy Amelia Sparrow. Lucy is a PhD Candidate in Human-Computer Interaction at the University of Melbourne. Her research focuses on ethics and multiplayer digital games, with other interests in virtual reality and hybrid boardgames. Lucy is a tutor in game design and an academic editor, and has held a number of research and teaching positions at universities across Hong Kong and Australia. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
Are virtual worlds amoral? Do we value them for their freedom from ordinary moral rules?
Is there an important distinction between virtual reality and games?
Do games generate their own internal ethics?
How prevalent are unwanted digitally enacted sexual interactions?
How do gamers respond to such interactions? Do they take them seriously?
How can game designers address this problem?
Do gamers tolerate immoral actions more than the norm?
Can there be a productive form of distrust in video game design?

Relevant Links
Lucy on Twitter
Lucy on ResearchGate
'Apathetic villagers and the trolls who love them' by Lucy Sparrow, Martin Gibbs and Michael Arnold
'From "Silly" to "Scumbag": Reddit Discussion of a Case of Groping in a Virtual Reality Game' by Lucy et al
'Productive Distrust: Playing with the player in digital games' by Lucy et al
'The "digital animal intuition": the ethics of violence against animals in video games' by Simon Coghlan and Lucy Sparrow
