
Philosophical Disquisitions

Summary: Interviews with experts about the philosophy of the future.

Podcasts:

 Episode #34 - Lin on the Rise of Cyborg Finance | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Tom Lin. Tom is a Professor of Law at Temple University's Beasley School of Law. His research and teaching expertise are in the areas of corporations, securities regulation, financial technology, financial regulation, and compliance. Professor Lin's research has been published in and cited by numerous leading law journals, and featured in The Wall Street Journal, Bloomberg News, and The Financial Times, among other media outlets. We talk about the rise of 'cyborg finance' (Cy-Fi) and the regulatory challenges it poses. You can download the episode here, or listen below. You can also subscribe on Apple Podcasts or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:30 - What is cyborg finance?
5:57 - What explains the rise of cyborg finance? Innovation, regulation and competition
9:00 - The problem of systemic risk in the financial system
15:05 - "Too linked to fail" - the first systemic risk of cyborg finance
19:30 - "Too fast to save" - the second systemic risk of cyborg finance
23:00 - The problem of short-term thinking in the financial system
27:15 - Does cyborg finance undermine the idea of the 'reasonable investor'?
34:57 - The problem of cybernetic market manipulation
37:44 - Are these genuinely novel threats or old threats in a new guise?
41:11 - Regulatory principles and values for the age of cyborg finance

Relevant Links
Tom's faculty webpage
Tom's SSRN page
"The New Investor" by Tom Lin
"The New Financial Industry" by Tom Lin
"The New Market Manipulation" by Tom Lin
Episode #22 - Wellman and Rajan on Automated Trading
Episode #25 - McNamara on Fairness, Utility and High Frequency Trading

 Episode #33 - McArthur and Danaher on Robot Sex | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Neil McArthur about a book that he and I recently co-edited entitled Robot Sex: Social and Ethical Implications (MIT Press, 2017). Neil is a Professor of Philosophy at the University of Manitoba, where he also directs the Center for Professional and Applied Ethics. This is a free-ranging conversation. We talk about what got us interested in the topic of robot sex, our own arguments and ideas, some of the feedback we've received on the book, some of our favourite sexbot-related media, and where we think the future of the debate might go. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction to Neil
1:42 - How did Neil go from writing about David Hume to robot sex?
5:15 - Why did I (John Danaher) get interested in this topic?
6:49 - The astonishing media interest in robot sex
8:58 - Why did we put together this book?
11:05 - Neil's general outlook on the robot sex debate
16:41 - Could sex robots address the problems of loneliness and isolation?
19:46 - Why a passive and compliant sex robot might be a good thing
21:08 - Could sex robots enhance existing human relationships?
25:53 - Sexual infidelity and the intermediate ontological status of sex robots
31:23 - Ethical behaviourism and robots
34:36 - My perspective on the robot sex debate
37:32 - Some legitimate concerns about robot sex
44:20 - Some of our favourite arguments or ideas from the book (acknowledging that all the contributions are excellent!)
54:37 - Neil's book launch - some of the feedback from a lay audience
58:25 - Where will the debate go in the future? Neil's thoughts on the rise of the digisexual
1:02:54 - Our favourite fictional sex robots

Relevant Links
Robot Sex: Social and Ethical Implications (available on Amazon, Book Depository and from the publisher)
Neil's homepage
Media coverage of our book
The Status Quo Bias in Applied Ethics
'The Sex Robots are Coming: Seedy, sordid but mainly just sad' by Fiona Sturges
Our Guardian op-ed on the potential upside of sex robots
Richard Herring's sex robot sketches
Neil's article on the rise of the digisexual
Neil's one-man show on cryonics, "Let Me Freeze Your Head!"

 Episode #32 - Carter and Palermos on Extended Cognition and Extended Assault | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Adam Carter and Orestis Palermos. Adam is a Lecturer in Philosophy at the University of Glasgow. His primary research interests lie in the area of epistemology, but he has increasingly explored connections between epistemology and other disciplines, including bioethics (especially human enhancement), the philosophy of mind and cognitive science. Orestis is a Lecturer in Philosophy at Cardiff University. His research focuses on how 'philosophy can impact the engineering of emerging technologies and socio-technical systems'. We talk about the theory of the extended mind and the idea of extended assault. You can download the episode here or listen to it below. You can also subscribe on iTunes and Stitcher (RSS feed).

Show Notes
0:00 - Introduction
0:55 - The story of David Leon Riley and the phone search
3:15 - What is extended cognition?
7:35 - Extended cognition vs extended mind - exploring the difference
13:35 - What counts as part of an extended cognitive system? The role of dynamical systems theory
19:14 - Does cognitive extension come in degrees?
24:18 - Are smartphones part of our extended cognitive systems?
28:10 - Are we over-extended? Do we rely too much on technology?
35:02 - Making the case for extended personal assault
39:50 - Does functional disability make a difference to the case for extended assault?
43:35 - Does pain matter to our understanding of assault?
49:50 - Does the replaceability/fungibility of technology undermine the case for extended assault?
55:00 - Online hacking as a form of personal assault
59:30 - The ethics of extended expertise
1:02:58 - Distributed cognition and distributed blame

Relevant Links
J. Adam Carter's homepage
Orestis Palermos's homepage
'Is having your computer compromised a personal assault? The ethics of extended cognition' by Carter and Palermos
'Extended Cognition and the Possibility of Extended Assault' by John Danaher (a summary of the above paper)
Dynamical systems theory
Clark and Chalmers, 'The Extended Mind'
Garry Kasparov, Deep Thinking
Richard Heersmink, 'The Internet, Cognitive Enhancement and the Values of Cognition'

 Episode #31 - Hartzog on Robocops and Automated Law Enforcement | File Type: audio/mpeg | Duration: Unknown

In this episode I am joined by Woodrow Hartzog. Woodrow is currently a Professor of Law and Computer Science at Northeastern University (he was the Starnes Professor at Samford University's Cumberland School of Law when this episode was recorded). His research focuses on privacy, human-computer interaction, online communication, and electronic agreements. He holds a Ph.D. in mass communication from the University of North Carolina at Chapel Hill, an LL.M. in intellectual property from the George Washington University Law School, and a J.D. from Samford University. He previously worked as an attorney in private practice and as a trademark attorney for the United States Patent and Trademark Office. He also served as a clerk for the Electronic Privacy Information Center. We talk about the rise of automated law enforcement and the virtue of an inefficient legal system. You can download the episode here or listen below. You can also subscribe to the podcast via iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:00 - What is automated law enforcement? The 3 steps
6:30 - What about the robocops?
10:00 - The importance of hidden forms of automated law enforcement
12:55 - What areas of law enforcement are ripe for automation?
17:53 - The ethics of automated prevention vs automated punishment
23:10 - The three reasons for automated law enforcement
26:00 - The privacy costs of automated law enforcement
32:13 - The virtue of discretion and inefficiency in the application of law
40:10 - An empirical study of automated law enforcement
44:35 - The conservation of inefficiency principle
48:40 - The practicality of conserving inefficiency
51:20 - Should we keep a human in the loop?
55:10 - The rules vs standards debate in automated law enforcement
58:36 - Can we engineer inefficiency into automated systems?
1:01:10 - When is automation desirable in law?

Relevant Links
Woody's homepage
Woody's SSRN page
'Inefficiently Automated Law Enforcement' by Woodrow Hartzog, Gregory Conti, John Nelson and Lisa Shay
'Obscurity and Privacy' by Woodrow Hartzog and Evan Selinger
Episode #4 with Evan Selinger on Algorithmic Outsourcing and Privacy
Knightscope robots
Robocop joins Dubai police to fight real-life crime

 Episode #30 - Bartholomew on Adcreep and the Case Against Modern Marketing | File Type: audio/mpeg | Duration: Unknown

In this episode I am joined by Mark Bartholomew. Mark is a Professor at the University at Buffalo School of Law. He writes and teaches in the areas of intellectual property and law and technology, with an emphasis on copyright, trademarks, advertising regulation, and online privacy. His book Adcreep: The Case Against Modern Marketing was recently published by Stanford University Press. We talk about the main ideas and arguments from this book. You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
0:55 - The crisis of attention
2:05 - Two types of adcreep
3:33 - The history of advertising and its regulation
9:26 - Does the history tell a clear story?
12:16 - Differences between Europe and the US
13:48 - How public and private spaces have been colonised by marketing
16:58 - The internet as an advertising medium
19:30 - Why have we tolerated adcreep?
25:32 - The corrupting effect of adcreep on politics
32:10 - Does advertising shape our identity?
36:39 - Is advertising's effect on identity worse than that of other external forces?
40:31 - The modern technology of advertising
45:44 - A digital panopticon that hides in plain sight
48:22 - Neuromarketing: hype or reality?
55:26 - Are we now selling ourselves all the time?
1:04:52 - What can we do to redress adcreep?

Relevant Links
Mark's homepage
Adcreep: The Case Against Modern Marketing
'Is there any way to stop adcreep?' by Mark
'Branding Politics: Emotion, authenticity, and the marketing culture of American political communication' by Michael Serazio
'The Presentation of Self in Everyday Life' by Erving Goffman

 Episode #29 - Moore on the Quantified Worker | File Type: audio/mpeg | Duration: Unknown

In this episode, I talk to Phoebe Moore. Phoebe is a researcher and a Senior Lecturer in International Relations at Middlesex University. She teaches International Relations and International Political Economy and has published several books, articles and reports about labour struggle, industrial relations and the impact of technology on workers' everyday lives. Her current research, funded by a BA/Leverhulme award, focuses on the use of self-tracking devices in companies. She is the author of a book on this topic entitled The Quantified Self in Precarity: Work, Technology and What Counts, which has just been published. We talk about the quantified self movement, the history of workplace surveillance, and a study that Phoebe did on tracking in a Dutch company. You can download the episode here, or listen below. You can also subscribe on iTunes and Stitcher.

Show Notes
0:00 - Introduction
1:27 - Origins and ethos of the Quantified Self movement
7:39 - Does self-tracking promote or alleviate anxiety?
10:10 - The importance of gamification
13:09 - The history of workplace surveillance (Taylor and the Gilbreths)
16:27 - How is workplace quantification different now?
20:26 - The agility agenda: workplace surveillance in an age of precarity
29:09 - Tracking affective/emotional labour
34:08 - Getting the opportunity to study the quantified worker in the field
38:18 - Can such workplace self-tracking exercises ever be truly voluntary?
41:05 - What were the key findings of the study?
46:07 - Why was there such a high drop-out rate?
49:37 - Did workplace tracking lead to increased competitiveness?
53:32 - Should we welcome or resist the quantified worker phenomenon?

Relevant Links
Phoebe's webpage
The book: The Quantified Self in Precarity: Work, Technology and What Counts
The Quantified Self movement homepage
'Regulating Well-Being in the Brave New Quantified Workplace' by Phoebe Moore and Lukasz Piwek
'The Quantified Self: What Counts in the Neoliberal Workplace' by Phoebe Moore and Andrew Robinson
Previous interview with Deborah Lupton about her work on the quantified self

 Episode #28 - Walch on the Misunderstandings of Blockchain Technology | File Type: audio/mpeg | Duration: Unknown

In this episode I am joined by Angela Walch. Angela is an Associate Professor at St. Mary's University School of Law. Her research focuses on money and the law, blockchain technologies, governance of emerging technologies and financial stability. She is a Research Fellow of the Centre for Blockchain Technologies at University College London. Angela was nominated for "Blockchain Person of the Year" for 2016 by Crypto Coins News for her work on the governance of blockchain technologies. She joins me for a conversation about the misleading terms used to describe blockchain technologies. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher.

Show Notes
0:00 - Introduction
2:06 - What is a blockchain?
6:15 - Is the blockchain distributed or shared?
7:57 - What's the difference between a public and a private blockchain?
11:20 - What's the relationship between blockchains and currencies?
18:43 - What is a miner? What's the difference between a full node and a partial node?
22:25 - Why is there so much confusion associated with blockchains?
29:50 - Should we regulate blockchain technologies?
36:00 - The problems of inconsistency and perverse innovation
41:40 - Why blockchains are not 'immutable'
58:04 - Why blockchains are not 'trustless'
1:00:00 - Definitional problems in practice
1:02:37 - What is to be done about the problem?

Relevant Links
Angela's homepage
Angela's Academia and SSRN pages
'The Path of the Blockchain Lexicon (and the Law)' by Angela Walch
'Call blockchain developers what they are: fiduciaries' by Angela Walch
Interview with Aaron Wright on Blockchain Technology and the Law
Interview with Rachel O'Dwyer on Bitcoin, Blockchains and the Digital Commons

 Episode #27 - Gilbert on the Ethics of Predictive Brain Implants | File Type: audio/mpeg | Duration: Unknown

In this episode I am joined by Frédéric Gilbert. Frédéric is a philosopher and bioethicist who is affiliated with quite a number of universities and research institutes around the world. He is currently a Scientist Fellow at the University of Washington (UW) in Seattle, US, but has a concomitant appointment with the Department of Medicine at the University of British Columbia, Vancouver, Canada. On top of that, he is an ARC DECRA Research Fellow at the University of Tasmania, Australia. We talk about the ethics of predictive brain implants. You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
1:50 - What is a predictive brain implant?
5:20 - What are we currently using predictive brain implants for?
7:40 - The three types of predictive brain implant
16:30 - Medical issues around brain implants
18:45 - Predictive brain implants and autonomy
22:40 - The effect of advisory implants on autonomy
35:20 - The effect of automated implants on autonomy
38:17 - Empirical findings on the experiences of patients
47:00 - Possible future uses of PBIs
51:25 - Dangers of speculative neuroethics

Relevant Links
Frédéric's homepage
Frédéric's page at the University of Tasmania
'A Threat to Autonomy? The Intrusion of Predictive Brain Implants' by Frédéric
'Are Predictive Brain Implants an Indispensable Feature of Autonomy?' by Frédéric and Mark Cook
'I Miss Being Me: Phenomenological Effects of Deep Brain Stimulation' by Frédéric and others
'The Tell-Tale Brain: The Effect of Predictive Brain Implants on Autonomy' by John Danaher
'If and Then: A Critique of Speculative Nanoethics' by Alfred Nordmann

 New Podcast - Ep 1 Tal Zarsky on the Ethics of Big Data and Predictive Analytics | File Type: audio/mpeg | Duration: Unknown

I've started a new podcast as part of my Algocracy and Transhumanism project. The aim of the project is to ask three questions: How does technology create new governance structures, particularly algorithmic governance structures? How does technology create new governance subjects, particularly through the augmentation and enhancement of the human body? And what implications does this have for our core political values, such as liberty, equality, privacy, transparency, accountability and so on? The first episode is now available. I interview Professor Tal Zarsky about the ethics of big data and predictive analytics. You can download it here or listen below. I will add iTunes and Stitcher subscription information once I have received approval from both.

Show Notes
0:00-2:00 - Introduction
2:00-12:00 - Defining Big Data, Data-Mining and Predictive Analytics
12:00-17:00 - Understanding a predictive analytics system
17:00-21:30 - Could we ever have an intelligent, automated decision-making system?
21:30-29:30 - Evaluating algorithmic governance systems: efficiency and fairness
29:30-36:00 - Could algocratic systems be less biased?
36:00-42:00 - Wouldn't algocratic systems inherit the biases of programmers/society?
42:00-54:30 - The value of transparency in algocratic systems
54:30-1:00:1 - The gaming-the-system objection

Links
Tal's SSRN profile page with links to his papers
Tal's profile page at the University of Haifa
'The Real Privacy Problem' by Evgeny Morozov
'Transparent Predictions' by Tal Zarsky
'Automated Prediction: Perception, Policy and Law' by Tal Zarsky
'Understanding Discrimination in the Scored Society' by Tal Zarsky
Tal's taxonomy of objections to automated decision-making
'The Logical Space of Algocracy' by John Danaher
'The Threat of Algocracy: Reality, Resistance and Accommodation' by John Danaher
'Discrimination in Online Ad Delivery' by Latanya Sweeney
'The Hidden Biases of Big Data' by Kate Crawford
'The Right to Privacy' by Warren and Brandeis (Harvard Law Review, 1890)

 Episode #26 - Behan on Technopolitics and the Automation of the State | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Anthony Behan. Anthony is a technologist with an interest in the political and legal aspects of technology. We have a wide-ranging discussion about the automation of the law and the politics of technology. The conversation is based on Anthony's thesis 'The Politics of Technology: An Assessment of the Barriers to Law Enforcement Automation in Ireland' (a link to which is available in the links section below). You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
2:35 - The relationship between technology and humanity
5:25 - Technology and the legitimacy of the state
8:15 - Is the state a kind of technology?
13:20 - Does technology have a political orientation?
20:20 - Automated traffic monitoring as a case study
24:40 - Studying automated traffic monitoring in Ireland
30:30 - The mismatch between technology and legal procedure
33:58 - Does technology create new forms of governance or does it just make old forms more efficient?
39:40 - The problem of discretion
43:45 - The feminist gap in the debate about the automation of the state
49:15 - A mindful approach to automation
53:00 - Postcolonialism and resistance to automation

Relevant Links
Follow Anthony on Twitter
Anthony's blog
'The Politics of Technology: An Assessment of the Barriers to Law Enforcement Automation in Ireland' by Anthony Behan
'The Politics of City Architecture' by Anthony Behan
Lewis Mumford
Jane Jacobs
Robert Moses

 Episode #25 - McNamara on Fairness, Utility and High Frequency Trading | File Type: audio/mpeg | Duration: Unknown

In this episode I am joined by Steven McNamara. Steven is a Professor of Law at the American University of Beirut, and is currently a visiting professor at the University of Florida School of Law. Once upon a time, Steven was a corporate lawyer. He is now an academic lawyer with interests in moral theory, business ethics and technological change in financial markets. He also has a PhD in philosophy and wrote a dissertation on Kant's use of Newtonian scientific method. We talk about the intersections between moral philosophy and high frequency trading, taking in the history of the U.S. stock market in the process. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes.

Show Notes
0:00 - Introduction
1:22 - The history of US stock markets
7:45 - The (regulatory) creation of a national market
13:10 - The origins of algorithmic trading
18:15 - What is high frequency trading?
21:30 - Does HFT 'rig' the market?
33:47 - Does the technology pose any novel threats?
40:30 - A utilitarian assessment of HFT: does it increase social welfare?
48:00 - Rejecting the utilitarian approach
50:30 - Fairness and reciprocity in HFT

Relevant Links
Steven McNamara's homepage at the University of Florida
'The Law and Ethics of High Frequency Trading' by Steven McNamara
Flash Boys by Michael Lewis
Dark Pools by Scott Patterson
'Michael Lewis reflects on Flash Boys' by Michael Lewis
'Moore's Law versus Murphy's Law: Algorithmic Trading and its Discontents' by Kirilenko and Lo
'A Sociology of Algorithms: High Frequency Trading and the Shaping of Markets' by Donald MacKenzie

 Episode #24 - Bryson on Why Robots Should Be Slaves | File Type: audio/mpeg | Duration: Unknown

In this episode I interview Joanna Bryson. Joanna is a Reader in Computer Science at the University of Bath. Joanna's primary research interest lies in using AI to understand natural intelligence, but she is also interested in the ethics of AI and robotics, the social uses of robots, and the political and legal implications of advances in robotics. In the latter field, she is probably best known for her 2010 article 'Robots Should Be Slaves'. We talk about the ideas and arguments contained in that paper, as well as some related issues in roboethics. You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (or via RSS).

Show Notes
0:00 - Introduction
1:10 - Robots and moral subjects
5:15 - The possibility of robot moral subjects
10:30 - Is it bad to be emotionally attached to a robot?
15:22 - Robots and legal/moral responsibility
19:57 - The standards for human robot commanders
22:22 - Are there some contexts in which we might want to create a person-like robot?
26:10 - Can we stop people from creating person-like robots?
28:00 - The principles that ought to guide robot design

Relevant Links
Joanna's homepage
'Robots Should Be Slaves' by Joanna
A Reddit 'Ask Me Anything' with Joanna
The EPSRC Principles of Robotics
Interview with David Gunkel on Robots and Cyborgs
Interview with Hin-Yan Liu on Robots and Responsibility
How to plug the robot responsibility gap

 Episode #23 - Liu on Responsibility and Discrimination in Autonomous Weapons and Self-Driving Cars | File Type: audio/mpeg | Duration: Unknown

In this episode I talk to Hin-Yan Liu. Hin-Yan is an Associate Professor of Law at the University of Copenhagen. His research interests lie at the frontiers of emerging technology governance, and in the law and policy of existential risks. His core agenda focuses upon the myriad challenges posed by artificial intelligence (AI) and robotics regulation. We talk about responsibility gaps in the deployment of autonomous weapons and crash optimisation algorithms for self-driving cars. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
1:03 - What is an autonomous weapon?
4:14 - The responsibility gap in the autonomous weapons debate
7:20 - The circumstantial responsibility gap
13:44 - The conceptual responsibility gap
21:00 - A tracing solution to the conceptual problem?
27:47 - Should we use strict liability standards to plug the gap(s)?
29:48 - What can we learn from the child soldiers debate?
33:02 - Crash optimisation algorithms for self-driving cars
36:15 - Could self-driving cars give rise to structural discrimination?
46:10 - Why it may not be easy to solve the structural discrimination problem
49:35 - The Immunity Device Thought Experiment
54:12 - Distinctions between the immunity device and other forms of insurance
59:30 - What's missing from the self-driving car debate?

Links
Hin-Yan's faculty webpage
Hin-Yan's academia.edu page
'Autonomy in Weapons Systems' by Hin-Yan
'Refining Responsibility: Differentiating Two Types of Responsibility Issues Raised by Autonomous Weapons Systems' by Hin-Yan
'The Ethics of Crash Optimisation Algorithms' by John Danaher
'The Ethics of Autonomous Cars' by Patrick Lin
Interview with Sven Nyholm about Trolley Problems and Self-Driving Cars

 Episode #22 - Wellman and Rajan on the Ethics of Automated Trading | File Type: audio/mpeg | Duration: Unknown

In this episode, I am joined by Michael Wellman and Uday Rajan. Michael is a Professor of Computer Science & Engineering at the University of Michigan, and Uday is a Professor of Business Administration and Chair and Professor of Finance and Real Estate at the same institution. Our conversation focuses on the ethics of autonomous trading agents on financial markets. We discuss algorithmic trading, high frequency trading, market manipulation, the AI control problem and more. You can download the episode here or listen below. You can also subscribe to the podcast on Stitcher or iTunes (here and here).

Show Notes
0:00 - Introduction
2:20 - What are autonomous trading agents and how prevalent are they?
3:36 - High frequency trading as a type of autonomous trading
5:36 - General uses of AI in financial trading
6:45 - What are the social benefits of autonomous trading agents?
10:10 - AI-related scandals on financial markets (with comments on the 2010 Flash Crash)
13:47 - Constructing an autonomous trading agent to engage in arbitrage operations
14:44 - What is arbitrage?
17:10 - Describing AI-based arbitrage on index securities
24:30 - The advantages of using autonomous agents to do this
27:20 - The ethical challenges of using autonomous agents to do this
27:54 - Autonomous trading agents and spoofing transactions
34:15 - Autonomous trading agents and other forms of market manipulation
39:00 - How do we address the problems posed?
42:40 - General lessons for the AI control problem

Relevant Links
Michael Wellman's homepage
Uday Rajan's homepage
Michael and Uday's paper 'Ethical Issues for Autonomous Trading Agents'
The Flash Crash - Wikipedia
SEC official report on the Flash Crash
'Yom Kippur War Tweet Prompts Higher Oil Prices' - Huffington Post
Borussia Dortmund team bus bombing
Interview with Anders Sandberg about time compression in computing

 Episode #21 - Mark Coeckelbergh on Robots and the Tragedy of Automation | File Type: audio/mpeg | Duration: Unknown

In this episode, I talk to Mark Coeckelbergh. Mark is a Professor of Philosophy of Media and Technology in the Department of Philosophy at the University of Vienna and President of the Society for Philosophy and Technology. He also has an affiliation as Professor of Technology and Social Responsibility at the Centre for Computing and Social Responsibility, De Montfort University, UK. We talk about robots and philosophy (robophilosophy), focusing on two topics in particular: first, the rise of carebots and the mechanisation of society; and second, Hegel's master-slave dialectic and its application to our relationship with technology. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
2:00 - What is a robot?
3:30 - What is robophilosophy? Why is it important?
4:45 - The phenomenological approach to roboethics
6:48 - What are carebots? Why do people advocate their use?
8:40 - Ethical objections to the use of carebots
11:20 - Could a robot ever care for us?
13:25 - Carebots and the problem of emotional deception
18:16 - Robots, modernity and the mechanisation of society
21:50 - The master-slave dialectic in human-robot relationships
25:17 - Robots and our increasing alienation from reality
30:40 - Technology and the automation of human beings

Relevant Links
Mark's homepage
Human Being @ Risk by Mark Coeckelbergh
New Romantic Cyborgs by Mark Coeckelbergh
'Artificial agents, good care and modernity' by Mark Coeckelbergh
'The tragedy of the master: automation, vulnerability and distance' by Mark Coeckelbergh
'The Carebot Dystopia: An Analysis' by John Danaher
Hegel's master-slave dialectic, as explained on the Internet Encyclopedia of Philosophy
