Developers are constantly testing how online users react to their designs. Will they stay longer on the site because of this shade of blue? Will they get depressed if we show them depressing social media posts? What happens if we intentionally mismatch people on our dating website? When it comes to shades of blue, perhaps that’s not a big deal. But when it comes to mental health and deceiving people? Now we’re in ethically choppy waters. My discussion today is with Cennydd Bowles, Managing Director of NowNext, where he helps organizations develop ethically sound products. He’s also the author of the book “Future Ethics.” He argues that A/B testing on people is often ethically wrong and fosters, among developers, a culture of willingness to manipulate users. A great conversation, ranging from the ethics of experimentation to marketing and even to capitalism.
--------
48:25
AI Risk Mitigation is Insanely Complex
There’s a picture in our heads that’s overly simplistic, and the result is that we don’t think clearly about AI risks. The simplistic picture is that a team develops AI and then it gets used. The truth, the more complex picture, is that a thousand hands touch that AI before it ever becomes a product. This means that risk identification and mitigation are spread across a very complex supply chain. My guest, Jason Stanley, is at the forefront of research and application when it comes to managing all this complexity.
--------
39:57
Did You Say "Quantum" Computer?
From the best of season 1: Microsoft recently announced an (alleged!) breakthrough in quantum computing. But what in the world is a quantum computer, what can it do, and what are the potential ethical implications of this powerful new tech? Brian and I discuss these issues and more. And don’t worry! No knowledge of physics required.
--------
45:50
What Psychologists Say About AI Relationships
Every specialist in anything thinks they should have a seat at the AI ethics table. I’m usually skeptical. But psychologist Madeline Reinecke, Ph.D. did a great job defending her view that – you guessed it – psychologists should have a seat at the AI ethics table. Our conversation ranged from the role of psychologists in creating AI that supports healthy human relationships, to when children start and stop attributing sentience to robots, to loving relationships with AI, to the threat of AI-induced self-absorption. I guess I need to have more psychologists on the show.
--------
41:56
Am I Wrong About Agentic AI?
A fun format for this episode. In Part I, I talk about how I see agentic AI unfolding and what ethical, social, and political risks come with it. In Part II, Eric Corriel, digital strategist at the School of Visual Arts and a close friend, tells me why he thinks I’m wrong. Debate ensues.
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.