By Lambert Strether of Corrente.
To start, I had wanted to give you the origin for the post title, which is a famous parable in marketing, but I can’t find an attested, authoritative source, so I’ll settle for the oldest example I can find, 2004’s Publishing Confidential: An Insider’s Guide to What it Really Takes to Land a Non-Fiction Book Deal, by Paul Brown, where two would-be authors have just focus-grouped their manuscript:
If you are writing a service book, and potential readers tell you the title is awful, they want more callouts and checklists, and they wouldn’t mind if the book were completely modular so they could concentrate on the stuff that they thought would help them and be able to skip everything else, then you might want to listen.
As [my co-author] told me at the end of the focus group, when we were scrambling to find anything positive that had come out of the experience: “If the dogs won’t eat the dog food, it is bad dog food.”
From there, the phrase seems to have migrated to the business school/venture capital nexus, for obvious reasons, and thence, for really obvious reasons, to the world of political operatives and pundits.
Now, self-driving cars are, so we are told, an inevitability, and — like Strong AI or the Paperless Office — the next big thing (though to be fair, there’s been some pushback to the relentless PR in recent weeks). It’s also worth noting that David Plouffe, Obama’s campaign manager in 2008, now works for Uber, that the Obama administration just issued guidelines for self-driving cars, and that any number of Democrat (and Republican) operatives are doubtless preparing to help Silicon Valley smooth away tiresome bureaucratic obstacles (and allocate any coming infrastructure monies). A more-or-less random selection of recent technological triumphalism:
- The Verge
- Global News
- Wharton School
- New York Times
Fascinatingly, all this breathless coverage assumes that self-driving cars are a thing, as opposed to a thing that might or might not one day be. And in my Ahab-like pursuit of the bezzle, I’ve been doing a good deal of reading up, trying to figure out if the tech for “autonomous vehicles” is truly there, what the business models for selling self-driving cars might be (if indeed they are to be sold, as opposed to being rented), effects on political economy (income inequality, public works, insurance), incremental approaches (trucks on highways first), and social benefits (for example, lives saved). Those posts are coming soon, but not now. In this post, I want to focus on the question of whether consumers, in the marketplace, will accept self-driving cars at all. That is, will the dogs eat the dog food?
That’s where “The Trolley Problem” comes in. Here is the obligatory image:
And here’s an explanation (for some reason, the “Trolley Problem” had a moment in 2016, and then went away, as so many problems do):
In a simple formulation of the Trolley Problem, we imagine a trolley hurtling toward a cluster of five people who are standing on the track and facing certain death. By throwing a switch, an observer can divert the trolley to a different track where one person is standing, currently out of harm’s way but certain to die because of the observer’s actions.
Should the observer throw the switch — cutting the death toll from five to one? That is the “utilitarian” argument, which many people find persuasive. The obvious problem is, it puts the observer in the position of playing God — deciding who lives and who dies.
So, would you throw the switch? Gruesome, eh?
Although I don’t drive, I can see that the Trolley Problem is a potential problem inherent to driving, as when a driver must decide whether to swerve to avoid hitting the five people, even if at the expense of the one, and that the algorithmic “driver” of an “autonomous” vehicle — were there to be such a thing — would have to make the same sort of “decision” (or be programmed to avoid making it, which is another decision).
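To make that concrete, here is a minimal, purely illustrative sketch of what a crude utilitarian “driver” might look like; every name and number in it is hypothetical, since no actual vehicle planner is public:

```python
# Toy illustration only: a crude utilitarian "driver" for the Trolley Problem.
# All names and the casualty-counting model are hypothetical; real
# autonomous-vehicle planners are far more complex, and not public.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: int  # stand-in for a real probabilistic risk model

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # The "utilitarian" rule: pick whichever maneuver minimizes casualties,
    # i.e., throw the switch to cut the death toll from five to one.
    return min(options, key=lambda m: m.expected_casualties)

def choose_maneuver_abstaining(options: list[Maneuver]) -> Maneuver:
    # "Programmed to avoid making the decision": always stay the course.
    # Note that this is just as much a decision, written down in code.
    return options[0]

stay = Maneuver("stay the course", expected_casualties=5)
swerve = Maneuver("swerve onto the side track", expected_casualties=1)

print(choose_maneuver([stay, swerve]).name)             # swerve onto the side track
print(choose_maneuver_abstaining([stay, swerve]).name)  # stay the course
```

The abstaining version is no escape: whether the car optimizes or stays the course, somebody wrote the branch.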
Of course, one can — indeed, probably should — push back on the entire Trolley Problem, because it abstracts away from social relations. For example, suppose the “one person” were the academic ethicist who gave you this Sophie’s Choice, who put you in this impossible dilemma. Would that make throwing the switch easier? Or suppose one of the “cluster of five people” was Baby Hitler. Would that make throwing the switch easier? Or suppose we put ObamaCare into the frame of the Trolley Problem: Aren’t the “observers” — a curiously neutral term, come to think of it — equivalent to the policy makers who send some people to HappyVille, and others to Pain City, randomly? Wouldn’t a good measure of the just society be that it minimizes Trolley Problems altogether, instead of tasking meritocrats with devising the best algorithmic “solution” for them?
But even if we accept that the hard case of the Trolley Problem can make good ethical algorithms, three issues remain. First, professional philosophers are mostly, but not unanimously, with us on the question of throwing the switch:
[I]n a survey of professional philosophers on the Trolley Problem, 68.2% agreed, saying that one should pull the lever. So maybe this ‘problem’ isn’t a problem at all and the answer is to simply do the Utilitarian thing that [brings the] ‘greatest happiness to the greatest number.’
Trivially, if we are to remake the American transportation system, do we need more than a supermajority of professional philosophers to determine its ethical foundations? Less trivially, what about the 31.8% who are going to have the switch thrown on them whether they like it or not?
Second, is the question of throwing the switch really one that we want to leave to Silicon Valley?
[C]an you imagine a world in which say Google or Apple places a value on each of our lives, which could be used at any moment of time to turn a car into us to save others? Would you be okay with that?
Yes, I can, especially when I use Apple’s or Google’s increasingly crapified software.
Jean-François Bonnefon, a psychological scientist working at France’s National Center for Scientific Research, told me there is no historical precedent that applies to the study of self-driving ethics. ‘It is the very first time that we may massively and daily engage with an object that is programmed to kill us in specific circumstances. Trains do not self-destruct, no more than planes or elevators do. We may be afraid of plane crashes, but we know at least that they are due to mistakes or ill intent. In other words, we are used to self-destruction being a bug, not a feature.’
At this point, we might remember social relations once again and reflect that — rather like TSA Pre✓®, with whose $85 membership “you can speed through security” — there will doubtless be ways to buy yourself out of the operations of the Trolley Problem algorithm altogether. Plenty of historical precedent for that!
Fortunately, we don’t need to answer any of these questions to know that the Trolley Problem is real. Given that reality, are self-driving cars marketable? Will consumers buy them? A study by Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan, “The social dilemma of autonomous vehicles,” published in Science, suggests that the answer is no. Here is the abstract:
Autonomous Vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils—for example, running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six MTurk studies approved of utilitarian AVs (that sacrifice their passengers for the greater good), and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. They would disapprove of enforcing utilitarian AVs, and would be less willing to buy such a regulated AV.
So it seems that the dogs won’t eat the dog food. Whatever choice the ethical algorithm controlling the self-driving car makes, how do you market it? It’s hard to imagine what the advertising slogan would be — I’m imagining YouTubes for all these scenarios, here — for a car that puts your baby’s head through the windshield to save (say) five nuns. But the slogan for a car that flattens five nuns to save your baby isn’t that easy to imagine, either. And yet the advertising slogan for a car that isn’t programmed to protect your baby or the nuns also seems problematic. Perhaps there could be an emergency switch that lets the driver take back control. But then the vehicle isn’t really autonomous at all, is it? Perhaps the real ethical problem was removing the driver’s autonomy in the first place…
Bonnefon et al. conclude:
Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today [who?]. As we are about to endow millions of vehicles with autonomy [There Is No Alternative], taking algorithmic morality seriously has never been more urgent. Our data-driven approach highlights how the field of experimental ethics can give us key insights into the moral, cultural and legal standards that people expect from autonomous driving algorithms. For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest, let alone across different cultures with different moral attitudes to life-life tradeoffs — but public opinion and social pressure may very well shift as this conversation progresses.
Ah. A “conversation.” I told you the Democrats were involved. It is bad dog food, then. Perhaps the should be deployed.
[1] I would have thought it’s just a little early in the marketing cycle to propose appropriating public goods, but then again, Always Be Closing.
[2] If the field were anything other than professional philosophy, I would want to know who funds the dominant 68.2%.
[3] Of course, the switch could be the automotive equivalent of the “door close” button in an elevator.