Philosophical Disquisitions

Summary: Interviews with experts about the philosophy of the future.

Podcasts:

 Ethics of Academia (4) - Justin Weinberg | File Type: audio/mpeg | Duration: Unknown

In this episode of the Ethics of Academia, I chat to Justin Weinberg, Associate Professor of Philosophy at the University of South Carolina. Justin researches ethical and social philosophy, as well as metaphilosophy. He is also the editor of the popular Daily Nous blog and has, as a result, developed an interest in many of the moral dimensions of philosophical academia. Our conversation accordingly traverses a wide territory, from the purpose of philosophical research to the ethics of grading. You can download the episode here or listen below. You can also subscribe on Apple, Spotify, Google or any other preferred podcasting service.

 Ethics of Academia (3) - Regina Rini | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Regina Rini, Canada Research Chair at York University in Toronto. Regina has a background in neuroscience and cognitive science but now works primarily in moral philosophy. She has the distinction of writing a lot of philosophy for the public through her columns for the Times Literary Supplement, and the value of this becomes a major theme of our conversation. You can download the episode here or listen below. You can also subscribe on Apple, Spotify and other podcasting services.

 Ethics of Academia (2) with Michael Cholbi | File Type: audio/mpeg | Duration: Unknown

This is the second episode in my short series on The Ethics of Academia. In this episode I chat to Michael Cholbi, Professor of Philosophy at the University of Edinburgh. We reflect on the value of applied ethical research and the right approach to teaching. Michael has thought quite a lot about the ethics of work, in general, and the ethics of teaching and grading in particular. So those become central themes in our conversation. You can download the podcast here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

 The Ethics of Academia Podcast (Episode 1 with Sven Nyholm) | File Type: audio/mpeg | Duration: Unknown

I have been reflecting on the ethics of academic life for some time. I've written several articles about it over the years. These have focused on the ethics of grading, student-teacher relationships, academic career choice, and the value of teaching (among other things). I've only scratched the surface. It seems to me that academic life is replete with ethical dilemmas and challenges. Some systematic reflection on and discussion of those ethical challenges would seem desirable. Obviously, there is a fair bit of writing available on the topic but, as best I can tell, there is no podcast dedicated to it. So I decided to start one. I'm launching this podcast as both an addendum to my normal podcast (which deals primarily with the ethics of technology) and as an independent podcast in its own right. If you just want to subscribe to the Ethics of Academia, you can do so here (Apple and Spotify). (And if you do so, you'll get the added bonus of access to the first three episodes). I intend this to be a limited series but, if it proves popular, I might come back to it. In the first episode, I chat to Sven Nyholm (Utrecht University) about the ethics of research, teaching and administration. Sven is a longtime friend and collaborator. He has been one of my most frequent guests on my main podcast so he seemed like the ideal person to kickstart this series. Although we talk about a lot of different things, Sven draws particular attention to the ethical importance of the division of labour in academic life. You can download the episode here or listen below.

 98 - The Psychology of Human-Robot Interactions | File Type: audio/mpeg | Duration: Unknown

How easily do we anthropomorphise robots? Do we see them as moral agents or, even, moral patients? Can we dehumanise them? These are some of the questions addressed in this episode with my guests, Dennis Küster and Aleksandra Świderska. Dennis is a postdoctoral researcher at the University of Bremen. Aleksandra is a senior researcher at the University of Warsaw. They have worked together on a number of studies about how humans perceive and respond to robots. We discuss several of their joint studies in this episode. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
- Dennis's webpage
- Aleksandra's webpage
- 'I saw it on YouTube! How online videos shape perceptions of mind, morality, and fears about robots' by Dennis, Aleksandra and David Gunkel
- 'Robots as malevolent moral agents: Harmful behavior results in dehumanization, not anthropomorphism' by Aleksandra and Dennis
- 'Seeing the mind of robots: Harm augments mind perception but benevolent intentions reduce dehumanisation of artificial entities in visual vignettes' by Dennis and Aleksandra

 97 - The Perils of Predictive Policing (& Automated Decision-Making) | File Type: audio/mpeg | Duration: Unknown

One particularly important social institution is the police force, who are increasingly using technological tools to help efficiently and effectively deploy policing resources. I've covered criticisms of these tools in the past, but in this episode, my guest Daniel Susser has some novel perspectives to share on this topic, as well as some broader reflections on how humans can relate to machines in social decision-making. This one was a lot of fun and covered a lot of ground. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
- Daniel's Homepage
- Daniel on Twitter
- 'Predictive Policing and the Ethics of Preemption' by Daniel
- 'Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making' by Daniel (and Kiel Brennan-Marquez and Karen Levy)

 96 - How Does Technology Mediate Our Morals? | File Type: audio/mpeg | Duration: Unknown

It is common to think that technology is morally neutral. "Guns don't kill people; people kill people" - as the typical gun lobby argument goes. But is this really the right way to think about technology? Could it be that technology is not so neutral as we might suppose? These are questions I explore today with my guest Olya Kudina. Olya is an ethicist of technology focusing on the dynamic interaction between values and technologies. Currently, she is an Assistant Professor at Delft University of Technology. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
- Olya's Homepage
- Olya on Twitter
- The technological mediation of morality: value dynamism, and the complex interaction between ethics and technology - Olya's PhD thesis
- 'Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy' by Olya and Peter-Paul Verbeek
- '"Alexa, who am I?": Voice Assistants and Hermeneutic Lemniscate as the Technologically Mediated Sense-Making' by Olya
- 'Moral Uncertainty in Technomoral Change: Bridging the Explanatory Gap' by Philip Nickel, Olya Kudina and Ibo van de Poel

 95 - The Psychology of the Moral Circle | File Type: audio/mpeg | Duration: Unknown

I was raised in the tradition of believing that everyone is of equal moral worth. But when I scrutinise my daily practices, I don't think I can honestly say that I act as if everyone is of equal moral worth. The idea that some people belong within the circle of moral concern and some do not is central to many moral systems. But what affects the dynamics of the moral circle? How does it contract and expand? Can it expand indefinitely? In this episode I discuss these questions with Joshua Rottman. Josh is an Associate Professor in the Department of Psychology and the Program in Scientific and Philosophical Studies of Mind at Franklin and Marshall College. His research is situated at the intersection of cognitive development and moral psychology, and he primarily focuses on studying the factors that lead certain entities and objects to be attributed with (or stripped of) moral concern. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
- The normative significance of moral psychology
- The concept of the moral circle
- How the moral circle develops in children
- How the moral circle changes over time
- Can the moral circle expand indefinitely?
- Do we have a limited budget of moral concern?
- Do most people underuse their budget of moral concern?
- Why do some people prioritise the non-human world over marginal humans?

Relevant Links
- Josh's webpage at F and M College
- Josh's personal webpage
- Josh at Psychology Today
- 'Tree huggers vs Human Lovers' by Josh et al
- Summary of the above article at Psychology Today
- 'Towards a Psychology of Moral Expansiveness' by Crimston et al

 94 - Robot Friendship and Hatred | File Type: audio/mpeg | Duration: Unknown

Can we move beyond the Aristotelian account of friendship when thinking about our relationships with robots? Can we hate robots? In this episode, I talk to Helen Ryland about these topics. Helen is a UK-based philosopher. She completed her PhD in Philosophy in 2020 at the University of Birmingham. She now works as an Associate Lecturer for The Open University. Her work examines human-robot relationships, video game ethics, and the personhood and moral status of marginal cases of human rights (e.g., subjects with dementia, nonhuman animals, and robots). You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics covered include:
- What is friendship and why does it matter?
- The Aristotelian account of friendship
- Limitations of the Aristotelian account
- Moving beyond Aristotle
- The degrees of friendship model
- Why we can be friends with robots
- Criticisms of robot-human friendship
- The possibility of hating robots
- Do we already hate robots?
- Why would it matter if we did hate robots?

Relevant Links
- Helen's homepage
- 'It's Friendship Jim, But Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships' by Helen
- 'Could you hate a robot? Does it matter if you could?' by Helen

 93 - Will machines impede moral progress? | File Type: audio/mpeg | Duration: Unknown

Lots of people are worried about the ethics of AI. One particular area of concern is whether we should program machines to follow existing normative/moral principles when making decisions. But social moral values change over time. Should machines not be designed to allow for such changes? If machines are programmed to follow our current values, will they impede moral progress? In this episode, I talk to Ben Kenward and Thomas Sinclair about this issue. Ben is a Senior Lecturer in Psychology at Oxford Brookes University in the UK. His research focuses on ecological psychology, mainly examining environmental activism such as the Extinction Rebellion movement of which he is a part. Thomas is a Fellow and Tutor in Philosophy at Wadham College, Oxford, and an Associate Professor of Philosophy at Oxford's Faculty of Philosophy. His research and teaching focus on questions in moral and political philosophy. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
- What is a moral value?
- What is a moral machine?
- What is moral progress?
- Has society progressed, morally speaking, in the past?
- How can we design moral machines?
- What's the problem with getting machines to follow our current moral consensus?
- Will people over-defer to machines? Will they outsource their moral reasoning to machines?
- Why is a lack of moral progress such a problem right now?

Relevant Links
- Thomas's webpage
- Ben's webpage
- 'Machine morality, moral progress and the looming environmental disaster' by Ben and Tom

 92 - The Ethics of Virtual Worlds | File Type: audio/mpeg | Duration: Unknown

Are virtual worlds free from the ethical rules of ordinary life? Do they generate their own ethical codes? How do gamers and game designers address these issues? These are the questions that I explore in this episode with my guest Lucy Amelia Sparrow. Lucy is a PhD Candidate in Human-Computer Interaction at the University of Melbourne. Her research focuses on ethics and multiplayer digital games, with other interests in virtual reality and hybrid boardgames. Lucy is a tutor in game design and an academic editor, and has held a number of research and teaching positions at universities across Hong Kong and Australia. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
- Are virtual worlds amoral? Do we value them for their freedom from ordinary moral rules?
- Is there an important distinction between virtual reality and games?
- Do games generate their own internal ethics?
- How prevalent are unwanted digitally enacted sexual interactions?
- How do gamers respond to such interactions? Do they take them seriously?
- How can game designers address this problem?
- Do gamers tolerate immoral actions more than the norm?
- Can there be a productive form of distrust in video game design?

Relevant Links
- Lucy on Twitter
- Lucy on ResearchGate
- 'Apathetic villagers and the trolls who love them' by Lucy Sparrow, Martin Gibbs and Michael Arnold
- 'From "Silly" to "Scumbag": Reddit Discussion of a Case of Groping in a Virtual Reality Game' by Lucy et al
- 'Productive Distrust: Playing with the player in digital games' by Lucy et al
- 'The "digital animal intuition": the ethics of violence against animals in video games' by Simon Coghlan and Lucy Sparrow

 91 - Rights for Robots, Animals and Nature? | File Type: audio/mpeg | Duration: Unknown

Should robots have rights? How about chimpanzees? Or rivers? Many people ask these questions individually, but few people have asked them all together at the same time. In this episode, I talk to a man who has. Josh Gellers is an Associate Professor in the Department of Political Science and Public Administration at the University of North Florida, a Fulbright Scholar to Sri Lanka, a Research Fellow of the Earth System Governance Project, and Core Team Member of the Global Network for Human Rights and the Environment. His research focuses on environmental politics, rights, and technology. He is the author of The Global Emergence of Constitutional Environmental Rights (Routledge 2017) and Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge 2020). We talk about the arguments and ideas in the latter book. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics covered include:
- Should we even be talking about robot rights?
- What is a right? What's the difference between a legal and moral right?
- How do we justify the ascription of rights?
- What is personhood? Who counts as a person?
- Properties versus relations - what matters more when it comes to moral status?
- What can we learn from the animal rights case law?
- What can we learn from the Rights of Nature debate?
- Can we imagine a future in which robots have rights? What kinds of rights might those be?

Relevant Links
- Josh's homepage
- Josh on Twitter
- Rights for Robots: Artificial Intelligence, Animal and Environmental Law by Josh (digital version available Open Access)
- 'Earth system law and the legal status of non-humans in the Anthropocene' by Josh

 90 - The Future of Identity | File Type: audio/mpeg | Duration: Unknown

What does it mean to be human? What does it mean to be you? Philosophers, psychologists and sociologists all seem to agree that your identity is central to how you think of yourself and how you engage with others. But how are emerging technologies changing how we enact and constitute our identities? That's the subject matter of this podcast with Tracey Follows. Tracey is a professional futurist. She runs a consultancy firm called Futuremade. She is a regular writer and speaker on futurism. She has appeared on the BBC and is a contributing columnist with Forbes. She is also a member of the Association of Professional Futurists and the World Futures Studies Federation. We talk about her book The Future of You: Can your identity survive the 21st Century? You can download the podcast here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics covered in this episode include:
- The nature of identity
- The link between technology and identity
- Is technology giving us more creative control over identity?
- Does technology encourage more conformity and groupthink?
- Is our identity being fragmented by technology?
- Who controls the technology of identity formation?
- How should we govern the technology of identity formation in the future?

Relevant Links
- The Future of You by Tracey
- Tracey on Twitter
- Tracey at Forbes
- Futuremade consultancy
- Tracey's talk to the London Futurists

 89 - Is Morality All About Cooperation? | File Type: audio/mpeg | Duration: Unknown

What are the origins and dynamics of human morality? Is morality, at root, an attempt to solve basic problems of cooperation? What implications does this have for the future? In this episode, I chat to Dr Oliver Scott Curry about these questions. We discuss, in particular, his theory of morality as cooperation (MAC). Dr Curry is Research Director for Kindlab, at kindness.org. He is also a Research Affiliate at the School of Anthropology and Museum Ethnography, University of Oxford, and a Research Associate at the Centre for Philosophy of Natural and Social Science, at the London School of Economics. He received his PhD from LSE in 2005. Oliver's academic research investigates the nature, content and structure of human morality. He tackles such questions as: What is morality? How did morality evolve? What psychological mechanisms underpin moral judgments? How are moral values best measured? And how does morality vary across cultures? To answer these questions, he employs a range of techniques from philosophy, experimental and social psychology and comparative anthropology. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
- The nature of morality
- The link between human morality and cooperation
- The seven types of cooperation
- How these seven types of cooperation generate distinctive moral norms
- The evidence for the theory of morality as cooperation
- Is the theory underinclusive, reductive and universalist? Is that a problem?
- Is the theory overinclusive? Could it be falsified?
- Why Morality as Cooperation is better than Moral Foundations Theory
- The future of cooperation

Relevant Links
- Oliver's webpage
- Oliver on Twitter
- Oliver's Podcast - The Map
- 'Morality as Cooperation: A Problem-Centred Approach' by Oliver (sets out the theory of MAC)
- 'Morality is fundamentally an evolved solution to problems of social co-operation' (debate at the Royal Anthropological Society)
- 'Moral Molecules: Morality as a combinatorial system' by Oliver and his colleagues
- 'Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies' by Oliver and colleagues
- 'What is wrong with moral foundations theory?' by Oliver

 88 - The Ethics of Social Credit Systems | File Type: audio/mpeg | Duration: Unknown

Should we use technology to surveil, rate and punish/reward all citizens in a state? Do we do it anyway? In this episode I discuss these questions with Wessel Reijers, focusing in particular on the lessons we can learn from the Chinese Social Credit System. Wessel is a postdoctoral Research Associate at the European University Institute, working in the ERC project "BlockchainGov", which looks into the legal and ethical impacts of distributed governance. His research focuses on the philosophy and ethics of technology, notably on the development of a critical hermeneutical approach to technology and the investigation of the role of emerging technologies in the shaping of citizenship in the 21st century. He completed his PhD at Dublin City University with a dissertation entitled "Practising Narrative Virtue Ethics of Technology in Research and Innovation". In addition to a range of peer-reviewed articles, he recently published the book Narrative and Technology Ethics with Palgrave, which he co-authored with Mark Coeckelbergh. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed in this episode include:
- The Origins of the Chinese Social Credit System
- Historical Parallels to the System
- Social Credit Systems in Western Cultures
- Is China exceptional when it comes to the use of these systems?
- The impact of social credit systems on human values such as freedom and authenticity
- How the social credit system is reshaping citizenship
- The possible futures of social credit systems

Relevant Links
- Wessel's homepage
- Wessel on Twitter
- 'A Dystopian Future? The Rise of Social Credit Systems' - a written debate featuring Wessel
- 'How to Make the Perfect Citizen? Lessons from China's Model of Social Credit System' by Liav Orgad and Wessel Reijers
- Narrative and Technology Ethics by Wessel Reijers and Mark Coeckelbergh
